Humanoid Robots Wiki
wikidb
https://humanoids.wiki/w/Main_Page
MediaWiki 1.31.0
first-letter
Media
Special
Talk
User
User talk
Humanoid Robots Wiki
Humanoid Robots Wiki talk
File
File talk
MediaWiki
MediaWiki talk
Template
Template talk
Help
Help talk
Category
Category talk
Gadget
Gadget talk
Gadget definition
Gadget definition talk
Main Page
0
1
1
2018-07-06T01:27:54Z
MediaWiki default
0
wikitext
text/x-wiki
<strong>MediaWiki has been installed.</strong>
Consult the [https://www.mediawiki.org/wiki/Special:MyLanguage/Help:Contents User's Guide] for information on using the wiki software.
== Getting started ==
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Manual:Configuration_settings Configuration settings list]
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Manual:FAQ MediaWiki FAQ]
* [https://lists.wikimedia.org/mailman/listinfo/mediawiki-announce MediaWiki release mailing list]
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Localisation#Translation_resources Localise MediaWiki for your language]
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Manual:Combating_spam Learn how to combat spam on your wiki]
5702e4d5fd9173246331a889294caf01a3ad3706
2
1
2018-07-07T00:15:06Z
199.87.196.213
0
wikitext
text/x-wiki
<strong>MediaWiki has been installed.</strong>
<syntaxhighlight lang=bash>
#!/bin/bash
</syntaxhighlight>
Consult the [https://www.mediawiki.org/wiki/Special:MyLanguage/Help:Contents User's Guide] for information on using the wiki software.
== Getting started ==
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Manual:Configuration_settings Configuration settings list]
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Manual:FAQ MediaWiki FAQ]
* [https://lists.wikimedia.org/mailman/listinfo/mediawiki-announce MediaWiki release mailing list]
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Localisation#Translation_resources Localise MediaWiki for your language]
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Manual:Combating_spam Learn how to combat spam on your wiki]
4a8a61f37b0f1f2f01d7826a5c6559917443c54b
3
2
2018-07-07T00:15:35Z
199.87.196.213
0
wikitext
text/x-wiki
<strong>MediaWiki has been installed.</strong>
<syntaxhighlight lang=bash>
#Test of Syntax Highlight
</syntaxhighlight>
Consult the [https://www.mediawiki.org/wiki/Special:MyLanguage/Help:Contents User's Guide] for information on using the wiki software.
== Getting started ==
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Manual:Configuration_settings Configuration settings list]
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Manual:FAQ MediaWiki FAQ]
* [https://lists.wikimedia.org/mailman/listinfo/mediawiki-announce MediaWiki release mailing list]
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Localisation#Translation_resources Localise MediaWiki for your language]
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Manual:Combating_spam Learn how to combat spam on your wiki]
4dac9928604cac8ef81cd55d34940567098a4806
4
3
2018-07-07T00:15:53Z
199.87.196.213
0
wikitext
text/x-wiki
<strong>MediaWiki has been installed.</strong>
<syntaxhighlight lang=bash>
# Test of Syntax Highlight
</syntaxhighlight>
Consult the [https://www.mediawiki.org/wiki/Special:MyLanguage/Help:Contents User's Guide] for information on using the wiki software.
== Getting started ==
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Manual:Configuration_settings Configuration settings list]
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Manual:FAQ MediaWiki FAQ]
* [https://lists.wikimedia.org/mailman/listinfo/mediawiki-announce MediaWiki release mailing list]
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Localisation#Translation_resources Localise MediaWiki for your language]
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Manual:Combating_spam Learn how to combat spam on your wiki]
c2cbf3dc7cdb438197b1cc1afdb56ce496e4ed42
5
4
2024-04-23T20:12:59Z
Ben
2
Home page
wikitext
text/x-wiki
<h1>Humanoid Robot Wiki</h1>
Welcome to our wiki!
836a951866be43ca2fc601ebdbb4800916b8bce7
6
5
2024-04-23T20:13:10Z
Ben
2
wikitext
text/x-wiki
Welcome to our wiki!
6f8e6105ab29f72f6943ccdff9a27f733ace71bc
10
6
2024-04-23T23:24:03Z
Admin
1
wikitext
text/x-wiki
Welcome to the humanoid robots wiki!
2201b9727474e5af9b6abd848e8b0729efa1823d
11
10
2024-04-23T23:35:16Z
MattFreed
3
Add Hardware Link
wikitext
text/x-wiki
Welcome to the humanoid robots wiki!
<big>Navigation:</big>
* [[Hardware]]
8541f5b0c47db499dec9f8237202bd9cd7bc411a
16
11
2024-04-24T01:45:50Z
Ben
2
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
<big>Navigation:</big>
* [[Hardware]]
bff45cf8138e12849f826ceffc39ca4c66a96aaa
20
16
2024-04-24T01:48:18Z
Ben
2
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
=== Companies ===
- [[Unitree]]
- [[K-Scale Labs]]
ad53fe969f480bd8d8b8b4cb8b406054c7148298
22
20
2024-04-24T01:49:13Z
Ben
2
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
=== Companies ===
* [[Unitree]]
* [[K-Scale Labs]]
7da9dc5164c0448c55e025ca6f83cb31491d804b
23
22
2024-04-24T01:51:38Z
Ben
2
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
=== Getting Started ===
This is a free resource to learn about humanoid robots. As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Companies ===
* [[Tesla]]
* [[Agility]]
* [[Sanctuary]]
* [[1X]]
* [[Unitree]]
* [[K-Scale Labs]]
ee4e3c3d12fa4cce356f15b9ae54b4d0215f20a6
29
23
2024-04-24T01:55:01Z
Ben
2
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots. As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Companies ===
* [[Tesla]]
* [[Agility]]
* [[Sanctuary]]
* [[1X]]
* [[Unitree]]
* [[K-Scale Labs]]
b74711193adf859de54225190e3e126c3eb5118b
30
29
2024-04-24T01:57:12Z
Ben
2
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots. As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Companies ===
==== Robots ====
* [[Tesla]]
* [[Agility]]
* [[Sanctuary]]
* [[1X]]
* [[Unitree]]
* [[K-Scale Labs]]
==== Foundation Models ====
* [[Physical Intelligence]]
* [[Skild]]
042a104263722d2954a02d4a4fc566c1d7934a69
33
30
2024-04-24T01:59:45Z
Ben
2
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots. As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Companies ===
==== Robots ====
* [[Tesla]]
* [[Agility]]
* [[Sanctuary]]
* [[1X]]
* [[Unitree]]
* [[K-Scale Labs]]
==== Foundation Models ====
* [[Physical Intelligence]]
* [[Skild]]
=== Learning ===
* [[Underactuated Robotics]]
56cfd94b07f65246df1dafa198025aef6990771d
49
33
2024-04-24T02:07:54Z
Ben
2
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots. As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Companies ===
==== Robots ====
* [[Tesla]]
* [[Agility]]
* [[Sanctuary]]
* [[1X]]
* [[Fourier Intelligence]]
* [[Unitree]]
* [[K-Scale Labs]]
==== Foundation Models ====
* [[Physical Intelligence]]
* [[Skild]]
=== Learning ===
* [[Underactuated Robotics]]
4441a5e2a290521ca0d33bac770b52bac39385a7
50
49
2024-04-24T02:08:51Z
Ben
2
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots. As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Companies ===
==== Robots ====
* [[Tesla]]
* [[Agility]]
* [[Sanctuary]]
* [[1X]]
* [[Fourier Intelligence]]
* [[AGIBot]]
* [[Unitree]]
* [[UBTech]]
* [[Boston Dynamics]]
* [[Apptronik]]
* [[K-Scale Labs]]
==== Foundation Models ====
* [[Physical Intelligence]]
* [[Skild]]
=== Learning ===
* [[Underactuated Robotics]]
315a37b849144d01a767d17d33bbeb7eead0ed77
Stompy
0
2
7
2024-04-23T21:33:41Z
MattFreed
3
Initial Post
wikitext
text/x-wiki
= Hardware =
This page is dedicated to detailing the hardware selections for the K-Scale Labs humanoid robot, including various components such as actuators, cameras, compute units, PCBs and modules, batteries, displays, microphones, speakers, as well as wiring and connectors.
== Actuators ==
Actuators are the components that allow the robot to move and interact with its environment. They convert energy into mechanical motion. Common types used in humanoid robots include:
* Servo motors
* Stepper motors
* Linear actuators
== Cameras ==
Cameras are essential for visual processing, allowing the robot to perceive its surroundings. Important considerations include:
* Resolution and frame rate
* Field of view
* Depth sensing capabilities (3D cameras)
== Compute ==
The compute section handles the processing requirements of the robot. This includes:
* Microprocessors and microcontrollers
* Single-board computers like Raspberry Pi or Nvidia Jetson
* Dedicated AI accelerators for machine learning tasks
== PCB and Modules ==
Printed Circuit Boards (PCBs) and the modules on them are the backbone of the robot's electronic system.
* Main control board
* Power management modules
* Sensor interfaces
* Communication modules (Wi-Fi, Bluetooth)
== Batteries ==
Batteries provide the necessary power to all robotic systems and are crucial for mobile autonomy. Selection factors include:
* Battery type (Li-Ion, NiMH, Lead-Acid)
* Capacity (measured in mAh or Ah)
* Voltage and energy density
* Safety features and durability
== Displays ==
Displays are used in robots for displaying information such as system status, data, and interactive elements. Key features include:
* Size variations ranging from small to large panels
* Touchscreen capabilities
* High resolution displays
== Microphones ==
Microphones enable the robot to receive and process audio inputs, crucial for voice commands and auditory data. Factors to consider are:
* Sensitivity and noise cancellation
* Directionality (omnidirectional vs. unidirectional)
* Integration with voice recognition software
== Speakers ==
Speakers allow the robot to communicate audibly with its environment, essential for interaction and alerts. Considerations include:
* Power output and sound quality
* Size and mounting options
* Compatibility with audio processing hardware
== Wiring and Connectors ==
Proper wiring and connectors ensure reliable communication and power supply throughout the robot's components.
* Types of wires (gauge, shielding)
* Connectors (pin types, waterproofing)
* Cable management solutions
[[Category:Robotics]]
[[Category:Hardware Components]]
50e6a6d41a9f25139d6fb056099ecaf15449fc9c
8
7
2024-04-23T22:11:29Z
MattFreed
3
Actuator Information
wikitext
text/x-wiki
= Hardware =
This page is dedicated to detailing the hardware selections for the K-Scale Labs humanoid robot, including various components such as actuators, cameras, compute units, PCBs and modules, batteries, displays, microphones, speakers, as well as wiring and connectors.
== Actuators ==
Actuators are the components that allow the robot to move and interact with its environment. They convert energy into mechanical motion. Common types used in humanoid robots include:
* Servo motors
* Stepper motors
* Linear actuators
The K-Scale Labs robot uses quasi-direct-drive pancake motors based on the MIT Cheetah open-source actuators. These actuators are developed, manufactured, and sold by MyActuator.
The K-Scale Labs robot consists of:
* 10x MyActuator RMD-X4-H
* 6x MyActuator RMD-X6
* 9x MyActuator RMD-X8
* 4x MyActuator RMD-X10
== Cameras ==
Cameras are essential for visual processing, allowing the robot to perceive its surroundings. Important considerations include:
* Resolution and frame rate
* Field of view
* Depth sensing capabilities (3D cameras)
== Compute ==
The compute section handles the processing requirements of the robot. This includes:
* Microprocessors and microcontrollers
* Single-board computers like Raspberry Pi or Nvidia Jetson
* Dedicated AI accelerators for machine learning tasks
== PCB and Modules ==
Printed Circuit Boards (PCBs) and the modules on them are the backbone of the robot's electronic system.
* Main control board
* Power management modules
* Sensor interfaces
* Communication modules (Wi-Fi, Bluetooth)
== Batteries ==
Batteries provide the necessary power to all robotic systems and are crucial for mobile autonomy. Selection factors include:
* Battery type (Li-Ion, NiMH, Lead-Acid)
* Capacity (measured in mAh or Ah)
* Voltage and energy density
* Safety features and durability
== Displays ==
Displays are used in robots for displaying information such as system status, data, and interactive elements. Key features include:
* Size variations ranging from small to large panels
* Touchscreen capabilities
* High resolution displays
== Microphones ==
Microphones enable the robot to receive and process audio inputs, crucial for voice commands and auditory data. Factors to consider are:
* Sensitivity and noise cancellation
* Directionality (omnidirectional vs. unidirectional)
* Integration with voice recognition software
== Speakers ==
Speakers allow the robot to communicate audibly with its environment, essential for interaction and alerts. Considerations include:
* Power output and sound quality
* Size and mounting options
* Compatibility with audio processing hardware
== Wiring and Connectors ==
Proper wiring and connectors ensure reliable communication and power supply throughout the robot's components.
* Types of wires (gauge, shielding)
* Connectors (pin types, waterproofing)
* Cable management solutions
[[Category:Robotics]]
[[Category:Hardware Components]]
b6b03099a16463e8c68216e1c3c7d7f1130d051c
9
8
2024-04-23T22:21:19Z
MattFreed
3
spelling
wikitext
text/x-wiki
= Hardware =
This page is dedicated to detailing the hardware selections for the K-Scale Labs humanoid robot, including various components such as actuators, cameras, compute units, PCBs and modules, batteries, displays, microphones, speakers, as well as wiring and connectors.
== Actuators ==
Actuators are the components that allow the robot to move and interact with its environment. They convert energy into mechanical motion. Common types used in humanoid robots include:
* Servo motors
* Stepper motors
* Linear actuators
The K-Scale Labs robot uses quasi-direct-drive pancake motors based on the MIT Cheetah open-source actuators. These actuators are developed, manufactured, and sold by MyActuator.
The K-Scale Labs robot consists of:
* 10x MyActuator RMD-X4-H
* 6x MyActuator RMD-X6
* 9x MyActuator RMD-X8
* 4x MyActuator RMD-X10
== Cameras ==
Cameras are essential for visual processing, allowing the robot to perceive its surroundings. Important considerations include:
* Resolution and frame rate
* Field of view
* Depth sensing capabilities (3D cameras)
== Compute ==
The compute section handles the processing requirements of the robot. This includes:
* Microprocessors and microcontrollers
* Single-board computers like Raspberry Pi or Nvidia Jetson
* Dedicated AI accelerators for machine learning tasks
== PCB and Modules ==
Printed Circuit Boards (PCBs) and the modules on them are the backbone of the robot's electronic system.
* Main control board
* Power management modules
* Sensor interfaces
* Communication modules (Wi-Fi, Bluetooth)
== Batteries ==
Batteries provide the necessary power to all robotic systems and are crucial for mobile autonomy. Selection factors include:
* Battery type (Li-Ion, NiMH, Lead-Acid)
* Capacity (measured in mAh or Ah)
* Voltage and energy density
* Safety features and durability
== Displays ==
Displays are used in robots for displaying information such as system status, data, and interactive elements. Key features include:
* Size variations ranging from small to large panels
* Touchscreen capabilities
* High resolution displays
== Microphones ==
Microphones enable the robot to receive and process audio inputs, crucial for voice commands and auditory data. Factors to consider are:
* Sensitivity and noise cancellation
* Directionality (omnidirectional vs. unidirectional)
* Integration with voice recognition software
== Speakers ==
Speakers allow the robot to communicate audibly with its environment, essential for interaction and alerts. Considerations include:
* Power output and sound quality
* Size and mounting options
* Compatibility with audio processing hardware
== Wiring and Connectors ==
Proper wiring and connectors ensure reliable communication and power supply throughout the robot's components.
* Types of wires (gauge, shielding)
* Connectors (pin types, waterproofing)
* Cable management solutions
[[Category:Robotics]]
[[Category:Hardware Components]]
55a15c51657f80bdbc3553032d7e0d7f9a0beac9
12
9
2024-04-23T23:38:37Z
MattFreed
3
spacing
wikitext
text/x-wiki
= Hardware =
This page is dedicated to detailing the hardware selections for the K-Scale Labs humanoid robot, including various components such as actuators, cameras, compute units, PCBs and modules, batteries, displays, microphones, speakers, as well as wiring and connectors.
== Actuators ==
Actuators are the components that allow the robot to move and interact with its environment. They convert energy into mechanical motion. Common types used in humanoid robots include:
* Servo motors
* Stepper motors
* Linear actuators
The K-Scale Labs robot uses quasi-direct-drive pancake motors based on the MIT Cheetah open-source actuators. These actuators are developed, manufactured, and sold by MyActuator.
The K-Scale Labs robot consists of:
* 10x MyActuator RMD-X4-H
* 6x MyActuator RMD-X6
* 9x MyActuator RMD-X8
* 4x MyActuator RMD-X10
== Cameras ==
Cameras are essential for visual processing, allowing the robot to perceive its surroundings. Important considerations include:
* Resolution and frame rate
* Field of view
* Depth sensing capabilities (3D cameras)
== Compute ==
The compute section handles the processing requirements of the robot. This includes:
* Microprocessors and microcontrollers
* Single-board computers like Raspberry Pi or Nvidia Jetson
* Dedicated AI accelerators for machine learning tasks
== PCB and Modules ==
Printed Circuit Boards (PCBs) and the modules on them are the backbone of the robot's electronic system.
* Main control board
* Power management modules
* Sensor interfaces
* Communication modules (Wi-Fi, Bluetooth)
== Batteries ==
Batteries provide the necessary power to all robotic systems and are crucial for mobile autonomy. Selection factors include:
* Battery type (Li-Ion, NiMH, Lead-Acid)
* Capacity (measured in mAh or Ah)
* Voltage and energy density
* Safety features and durability
== Displays ==
Displays are used in robots for displaying information such as system status, data, and interactive elements. Key features include:
* Size variations ranging from small to large panels
* Touchscreen capabilities
* High resolution displays
== Microphones ==
Microphones enable the robot to receive and process audio inputs, crucial for voice commands and auditory data. Factors to consider are:
* Sensitivity and noise cancellation
* Directionality (omnidirectional vs. unidirectional)
* Integration with voice recognition software
== Speakers ==
Speakers allow the robot to communicate audibly with its environment, essential for interaction and alerts. Considerations include:
* Power output and sound quality
* Size and mounting options
* Compatibility with audio processing hardware
== Wiring and Connectors ==
Proper wiring and connectors ensure reliable communication and power supply throughout the robot's components.
* Types of wires (gauge, shielding)
* Connectors (pin types, waterproofing)
* Cable management solutions
[[Category:Robotics]]
[[Category:Hardware Components]]
856fdd819e1695b5bed76b440123c74bded7faf4
13
12
2024-04-24T01:31:54Z
Admin
1
wikitext
text/x-wiki
= Hardware =
This page is dedicated to detailing the hardware selections for humanoid robots, including various components such as actuators, cameras, compute units, PCBs and modules, batteries, displays, microphones, speakers, as well as wiring and connectors.
== Actuators ==
Actuators are the components that allow the robot to move and interact with its environment. They convert energy into mechanical motion. Common types used in humanoid robots include:
* Servo motors
* Stepper motors
* Linear actuators
Many humanoid robots use quasi-direct-drive pancake motors based on the MIT Cheetah open-source actuators. These actuators are developed, manufactured, and sold by MyActuator.
The K-Scale Labs robot consists of:
* 10x MyActuator RMD-X4-H
* 6x MyActuator RMD-X6
* 9x MyActuator RMD-X8
* 4x MyActuator RMD-X10
== Cameras ==
Cameras are essential for visual processing, allowing the robot to perceive its surroundings. Important considerations include:
* Resolution and frame rate
* Field of view
* Depth sensing capabilities (3D cameras)
== Compute ==
The compute section handles the processing requirements of the robot. This includes:
* Microprocessors and microcontrollers
* Single-board computers like Raspberry Pi or Nvidia Jetson
* Dedicated AI accelerators for machine learning tasks
== PCB and Modules ==
Printed Circuit Boards (PCBs) and the modules on them are the backbone of the robot's electronic system.
* Main control board
* Power management modules
* Sensor interfaces
* Communication modules (Wi-Fi, Bluetooth)
== Batteries ==
Batteries provide the necessary power to all robotic systems and are crucial for mobile autonomy. Selection factors include:
* Battery type (Li-Ion, NiMH, Lead-Acid)
* Capacity (measured in mAh or Ah)
* Voltage and energy density
* Safety features and durability
== Displays ==
Displays are used in robots for displaying information such as system status, data, and interactive elements. Key features include:
* Size variations ranging from small to large panels
* Touchscreen capabilities
* High resolution displays
== Microphones ==
Microphones enable the robot to receive and process audio inputs, crucial for voice commands and auditory data. Factors to consider are:
* Sensitivity and noise cancellation
* Directionality (omnidirectional vs. unidirectional)
* Integration with voice recognition software
== Speakers ==
Speakers allow the robot to communicate audibly with its environment, essential for interaction and alerts. Considerations include:
* Power output and sound quality
* Size and mounting options
* Compatibility with audio processing hardware
== Wiring and Connectors ==
Proper wiring and connectors ensure reliable communication and power supply throughout the robot's components.
* Types of wires (gauge, shielding)
* Connectors (pin types, waterproofing)
* Cable management solutions
[[Category:Robotics]]
[[Category:Hardware Components]]
676b51065476d9febd5528276531169818feb1ef
14
13
2024-04-24T01:44:51Z
MattFreed
3
remove kscale direct information for now
wikitext
text/x-wiki
= Hardware =
This page is dedicated to detailing the hardware selections for humanoid robots, including various components such as actuators, cameras, compute units, PCBs and modules, batteries, displays, microphones, speakers, as well as wiring and connectors.
== Actuators ==
Actuators are the components that allow the robot to move and interact with its environment. They convert energy into mechanical motion. Common types used in humanoid robots include:
* Servo motors
* Stepper motors
* Linear actuators
== Cameras ==
Cameras are essential for visual processing, allowing the robot to perceive its surroundings. Important considerations include:
* Resolution and frame rate
* Field of view
* Depth sensing capabilities (3D cameras)
== Compute ==
The compute section handles the processing requirements of the robot. This includes:
* Microprocessors and microcontrollers
* Single-board computers like Raspberry Pi or Nvidia Jetson
* Dedicated AI accelerators for machine learning tasks
== PCB and Modules ==
Printed Circuit Boards (PCBs) and the modules on them are the backbone of the robot's electronic system.
* Main control board
* Power management modules
* Sensor interfaces
* Communication modules (Wi-Fi, Bluetooth)
== Batteries ==
Batteries provide the necessary power to all robotic systems and are crucial for mobile autonomy. Selection factors include:
* Battery type (Li-Ion, NiMH, Lead-Acid)
* Capacity (measured in mAh or Ah)
* Voltage and energy density
* Safety features and durability
== Displays ==
Displays are used in robots for displaying information such as system status, data, and interactive elements. Key features include:
* Size variations ranging from small to large panels
* Touchscreen capabilities
* High resolution displays
== Microphones ==
Microphones enable the robot to receive and process audio inputs, crucial for voice commands and auditory data. Factors to consider are:
* Sensitivity and noise cancellation
* Directionality (omnidirectional vs. unidirectional)
* Integration with voice recognition software
== Speakers ==
Speakers allow the robot to communicate audibly with its environment, essential for interaction and alerts. Considerations include:
* Power output and sound quality
* Size and mounting options
* Compatibility with audio processing hardware
== Wiring and Connectors ==
Proper wiring and connectors ensure reliable communication and power supply throughout the robot's components.
* Types of wires (gauge, shielding)
* Connectors (pin types, waterproofing)
* Cable management solutions
[[Category:Robotics]]
[[Category:Hardware Components]]
0de5719c371084d0f179818e8070e81b74dd382a
17
14
2024-04-24T01:46:20Z
Ben
2
Ben moved page [[Hardware]] to [[K-Scale Labs Hardware]]: It is K-Scale Labs specific
wikitext
text/x-wiki
= Hardware =
This page is dedicated to detailing the hardware selections for humanoid robots, including various components such as actuators, cameras, compute units, PCBs and modules, batteries, displays, microphones, speakers, as well as wiring and connectors.
== Actuators ==
Actuators are the components that allow the robot to move and interact with its environment. They convert energy into mechanical motion. Common types used in humanoid robots include:
* Servo motors
* Stepper motors
* Linear actuators
== Cameras ==
Cameras are essential for visual processing, allowing the robot to perceive its surroundings. Important considerations include:
* Resolution and frame rate
* Field of view
* Depth sensing capabilities (3D cameras)
== Compute ==
The compute section handles the processing requirements of the robot. This includes:
* Microprocessors and microcontrollers
* Single-board computers like Raspberry Pi or Nvidia Jetson
* Dedicated AI accelerators for machine learning tasks
== PCB and Modules ==
Printed Circuit Boards (PCBs) and the modules on them are the backbone of the robot's electronic system.
* Main control board
* Power management modules
* Sensor interfaces
* Communication modules (Wi-Fi, Bluetooth)
== Batteries ==
Batteries provide the necessary power to all robotic systems and are crucial for mobile autonomy. Selection factors include:
* Battery type (Li-Ion, NiMH, Lead-Acid)
* Capacity (measured in mAh or Ah)
* Voltage and energy density
* Safety features and durability
== Displays ==
Displays are used in robots for displaying information such as system status, data, and interactive elements. Key features include:
* Size variations ranging from small to large panels
* Touchscreen capabilities
* High resolution displays
== Microphones ==
Microphones enable the robot to receive and process audio inputs, crucial for voice commands and auditory data. Factors to consider are:
* Sensitivity and noise cancellation
* Directionality (omnidirectional vs. unidirectional)
* Integration with voice recognition software
== Speakers ==
Speakers allow the robot to communicate audibly with its environment, essential for interaction and alerts. Considerations include:
* Power output and sound quality
* Size and mounting options
* Compatibility with audio processing hardware
== Wiring and Connectors ==
Proper wiring and connectors ensure reliable communication and power supply throughout the robot's components.
* Types of wires (gauge, shielding)
* Connectors (pin types, waterproofing)
* Cable management solutions
[[Category:Robotics]]
[[Category:Hardware Components]]
0de5719c371084d0f179818e8070e81b74dd382a
45
17
2024-04-24T02:03:51Z
Ben
2
wikitext
text/x-wiki
= Hardware =
This page is dedicated to detailing the hardware selections for humanoid robots, including various components such as actuators, cameras, compute units, PCBs and modules, batteries, displays, microphones, speakers, as well as wiring and connectors.
== Actuators ==
Actuators are the components that allow the robot to move and interact with its environment. They convert energy into mechanical motion. Common types used in humanoid robots include:
* Servo motors
* Stepper motors
* Linear actuators
== Cameras ==
Cameras are essential for visual processing, allowing the robot to perceive its surroundings. Important considerations include:
* Resolution and frame rate
* Field of view
* Depth sensing capabilities (3D cameras)
== Compute ==
The compute section handles the processing requirements of the robot. This includes:
* Microprocessors and microcontrollers
* Single-board computers like Raspberry Pi or Nvidia Jetson
* Dedicated AI accelerators for machine learning tasks
== PCB and Modules ==
Printed Circuit Boards (PCBs) and the modules on them are the backbone of the robot's electronic system.
* Main control board
* Power management modules
* Sensor interfaces
* Communication modules (Wi-Fi, Bluetooth)
== Batteries ==
Batteries provide the necessary power to all robotic systems and are crucial for mobile autonomy. Selection factors include:
* Battery type (Li-Ion, NiMH, Lead-Acid)
* Capacity (measured in mAh or Ah)
* Voltage and energy density
* Safety features and durability
== Displays ==
Displays are used in robots for displaying information such as system status, data, and interactive elements. Key features include:
* Size variations ranging from small to large panels
* Touchscreen capabilities
* High resolution displays
== Microphones ==
Microphones enable the robot to receive and process audio inputs, crucial for voice commands and auditory data. Factors to consider are:
* Sensitivity and noise cancellation
* Directionality (omnidirectional vs. unidirectional)
* Integration with voice recognition software
== Speakers ==
Speakers allow the robot to communicate audibly with its environment, essential for interaction and alerts. Considerations include:
* Power output and sound quality
* Size and mounting options
* Compatibility with audio processing hardware
== Wiring and Connectors ==
Proper wiring and connectors ensure reliable communication and power supply throughout the robot's components.
* Types of wires (gauge, shielding)
* Connectors (pin types, waterproofing)
* Cable management solutions
[[Category:Robots]]
85133efd7e50150f4695e1b8df3aa48d96d46d4a
46
45
2024-04-24T02:04:01Z
Ben
2
Ben moved page [[K-Scale Labs Hardware]] to [[Stompy]]
wikitext
text/x-wiki
= Hardware =
This page is dedicated to detailing the hardware selections for humanoid robots, including various components such as actuators, cameras, compute units, PCBs and modules, batteries, displays, microphones, speakers, as well as wiring and connectors.
== Actuators ==
Actuators are the components that allow the robot to move and interact with its environment. They convert energy into mechanical motion. Common types used in humanoid robots include:
* Servo motors
* Stepper motors
* Linear actuators
== Cameras ==
Cameras are essential for visual processing, allowing the robot to perceive its surroundings. Important considerations include:
* Resolution and frame rate
* Field of view
* Depth sensing capabilities (3D cameras)
== Compute ==
The compute section handles the processing requirements of the robot. This includes:
* Microprocessors and microcontrollers
* Single-board computers like Raspberry Pi or Nvidia Jetson
* Dedicated AI accelerators for machine learning tasks
== PCB and Modules ==
Printed Circuit Boards (PCBs) and the modules on them are the backbone of the robot's electronic system.
* Main control board
* Power management modules
* Sensor interfaces
* Communication modules (Wi-Fi, Bluetooth)
== Batteries ==
Batteries provide the necessary power to all robotic systems and are crucial for mobile autonomy. Selection factors include:
* Battery type (Li-Ion, NiMH, Lead-Acid)
* Capacity (measured in mAh or Ah)
* Voltage and energy density
* Safety features and durability
== Displays ==
Displays are used in robots for displaying information such as system status, data, and interactive elements. Key features include:
* Size variations ranging from small to large panels
* Touchscreen capabilities
* High resolution displays
== Microphones ==
Microphones enable the robot to receive and process audio inputs, crucial for voice commands and auditory data. Factors to consider are:
* Sensitivity and noise cancellation
* Directionality (omnidirectional vs. unidirectional)
* Integration with voice recognition software
== Speakers ==
Speakers allow the robot to communicate audibly with its environment, essential for interaction and alerts. Considerations include:
* Power output and sound quality
* Size and mounting options
* Compatibility with audio processing hardware
== Wiring and Connectors ==
Proper wiring and connectors ensure reliable communication and power supply throughout the robot's components.
* Types of wires (gauge, shielding)
* Connectors (pin types, waterproofing)
* Cable management solutions
[[Category:Robots]]
85133efd7e50150f4695e1b8df3aa48d96d46d4a
H1
0
3
15
2024-04-24T01:45:18Z
Ben
2
Created page with "It is available for purchase [https://shop.unitree.com/products/unitree-h1 here]."
wikitext
text/x-wiki
The Unitree H1 is available for purchase [https://shop.unitree.com/products/unitree-h1 here].
900095efde90381d578981a4184f54e668da07cf
Hardware
0
4
18
2024-04-24T01:46:20Z
Ben
2
Ben moved page [[Hardware]] to [[K-Scale Labs Hardware]]: It is K-Scale Labs specific
wikitext
text/x-wiki
#REDIRECT [[K-Scale Labs Hardware]]
53f4d1ebab3f12033dccb67237c2d445b0fecb21
K-Scale Labs
0
5
19
2024-04-24T01:47:48Z
Ben
2
Created page with "K-Scale Labs is building an open-source humanoid robot called Stompy. Their website is [https://kscale.dev/ here]."
wikitext
text/x-wiki
K-Scale Labs is building an open-source humanoid robot called Stompy.
Their website is [https://kscale.dev/ here].
40e0942e7db5e0ffc0af846735224cba3dedbce6
28
19
2024-04-24T01:54:51Z
Ben
2
wikitext
text/x-wiki
[https://kscale.dev/ K-Scale Labs] is building an open-source humanoid robot called Stompy.
d0533f3cfb47d26fa8f05607edc876d2ff943b0c
35
28
2024-04-24T02:00:37Z
Ben
2
wikitext
text/x-wiki
[https://kscale.dev/ K-Scale Labs] is building an open-source humanoid robot called [[K-Scale Labs Hardware|Stompy]].
31bc672d493af7c0dca35c856987fb252ab85c46
36
35
2024-04-24T02:01:26Z
Ben
2
wikitext
text/x-wiki
[https://kscale.dev/ K-Scale Labs] is building an open-source humanoid robot called [[K-Scale Labs Hardware|Stompy]].
[[Category:Companies]]
deae7c6fe0f7d7e5c7fd765fef883a4723a77ea5
48
36
2024-04-24T02:04:15Z
Ben
2
wikitext
text/x-wiki
[https://kscale.dev/ K-Scale Labs] is building an open-source humanoid robot called [[Stompy]].
[[Category:Companies]]
d180a3ab7af23ea495ff3e0923e9916efdda6a84
Unitree
0
6
21
2024-04-24T01:49:03Z
Ben
2
Created page with "Unitree is a company based out of China which has built a number of different types of robots. === Robots === * [[Unitree H1]]"
wikitext
text/x-wiki
Unitree is a company based in China that has built a number of different types of robots.
=== Robots ===
* [[Unitree H1]]
ab34503c057ce402cafbe936b3b66b1cc1663672
37
21
2024-04-24T02:01:37Z
Ben
2
wikitext
text/x-wiki
Unitree is a company based in China that has built a number of different types of robots.
=== Robots ===
* [[Unitree H1]]
[[Category:Companies]]
c5679d0cacd2419150766e0bd1dd696257e2a543
Tesla
0
7
24
2024-04-24T01:51:50Z
Ben
2
Created page with "Tesla is building a humanoid robot called Optimus."
wikitext
text/x-wiki
Tesla is building a humanoid robot called Optimus.
47566dbb56714ed62fadd552286ecdf5cd1461ed
41
24
2024-04-24T02:02:03Z
Ben
2
wikitext
text/x-wiki
Tesla is building a humanoid robot called Optimus.
[[Category:Companies]]
b41b0bd49949661e997e6ba50f698a3fa688aaa1
Agility
0
8
25
2024-04-24T01:52:31Z
Ben
2
Created page with "Agility has built several robots. Their humanoid robot is called Digit."
wikitext
text/x-wiki
Agility has built several robots. Their humanoid robot is called Digit.
b7e714a16ee5fb0e2fb64cf4fabf27734e6fed8d
40
25
2024-04-24T02:01:58Z
Ben
2
wikitext
text/x-wiki
Agility has built several robots. Their humanoid robot is called Digit.
[[Category:Companies]]
a2902ca85db51df97724bd28f08f427a6e25d0ae
Sanctuary
0
9
26
2024-04-24T01:53:25Z
Ben
2
Created page with "[https://sanctuary.ai/ Sanctuary AI] is a humanoid robot company. Their robot is called Phoenix."
wikitext
text/x-wiki
[https://sanctuary.ai/ Sanctuary AI] is a humanoid robot company. Their robot is called Phoenix.
9e4a36100b2f4b58649f89c323f0d5a3caf47e36
39
26
2024-04-24T02:01:51Z
Ben
2
wikitext
text/x-wiki
[https://sanctuary.ai/ Sanctuary AI] is a humanoid robot company. Their robot is called Phoenix.
[[Category:Companies]]
e8604c484200d8b58bd8921b26df01fa6974d4a6
1X
0
10
27
2024-04-24T01:54:29Z
Ben
2
Created page with "[https://www.1x.tech/ 1X] (formerly known as Halodi) is the best humanoid robot company. They have two robots: EVE and NEO. EVE is a wheeled robot while NEO has legs."
wikitext
text/x-wiki
[https://www.1x.tech/ 1X] (formerly known as Halodi) is a humanoid robot company. They have two robots: EVE and NEO. EVE is a wheeled robot, while NEO has legs.
f536513b93f4310ecdf1505345534b7523e06a07
38
27
2024-04-24T02:01:45Z
Ben
2
wikitext
text/x-wiki
[https://www.1x.tech/ 1X] (formerly known as Halodi) is a humanoid robot company. They have two robots: EVE and NEO. EVE is a wheeled robot, while NEO has legs.
[[Category:Companies]]
2206882797e51be1b841dde3d4f754333fa1c205
Skild
0
11
31
2024-04-24T01:58:05Z
Ben
2
Created page with "Skild is a stealth foundation model startup started by two faculty members from Carnegie Mellon University. === Articles === * [https://www.theinformation.com/articles/ventu..."
wikitext
text/x-wiki
Skild is a stealth foundation model startup founded by two faculty members from Carnegie Mellon University.
=== Articles ===
* [https://www.theinformation.com/articles/venture-fomo-hits-robotics-as-young-startup-gets-1-5-billion-valuation Venture FOMO Hits Robotics as Young Startup Gets $1.5 Billion Valuation]
71894f4fc42406204190ec634243c20c700491be
44
31
2024-04-24T02:03:22Z
Ben
2
wikitext
text/x-wiki
Skild is a stealth foundation model startup founded by two faculty members from Carnegie Mellon University.
=== Articles ===
* [https://www.theinformation.com/articles/venture-fomo-hits-robotics-as-young-startup-gets-1-5-billion-valuation Venture FOMO Hits Robotics as Young Startup Gets $1.5 Billion Valuation]
[[Category:Companies]]
02b55281177a97c01027d3a3f0b832d1fa403c4d
Physical Intelligence
0
12
32
2024-04-24T01:58:44Z
Ben
2
Created page with "[https://physicalintelligence.company/ Physical Intelligence] is a company based in the Bay Area which is building foundation models for embodied AI."
wikitext
text/x-wiki
[https://physicalintelligence.company/ Physical Intelligence] is a company based in the Bay Area which is building foundation models for embodied AI.
65657d80e72433bdfc4f25656dfe24154690824a
43
32
2024-04-24T02:02:54Z
Ben
2
wikitext
text/x-wiki
[https://physicalintelligence.company/ Physical Intelligence] is a company based in the Bay Area which is building foundation models for embodied AI.
[[Category:Companies]]
cb8cec216f584d2700508bd0e832b7c064b4654a
Underactuated Robotics
0
13
34
2024-04-24T02:00:11Z
Ben
2
Created page with "This is a course taught by Russ Tedrake at MIT."
wikitext
text/x-wiki
This is a course taught by Russ Tedrake at MIT.
1a007d96944837db3fbfa2a82f8c34680a013ed6
Category:Companies
14
14
42
2024-04-24T02:02:31Z
Ben
2
Created page with "This category is for companies building humanoid robots."
wikitext
text/x-wiki
This category is for companies building humanoid robots.
6074c836c244d3c7c2860678d324d2f31522adb9
K-Scale Labs Hardware
0
15
47
2024-04-24T02:04:01Z
Ben
2
Ben moved page [[K-Scale Labs Hardware]] to [[Stompy]]
wikitext
text/x-wiki
#REDIRECT [[Stompy]]
edf35715d7c99162d2c3854e57ff75dd4cf77d72
K-Scale Cluster
0
16
51
2024-04-24T02:30:13Z
Ben
2
Created page with "The K-Scale Labs cluster is a shared cluster for robotics research. This page contains notes on how to access the cluster. To get onboarded, you should send us the public key..."
wikitext
text/x-wiki
The K-Scale Labs cluster is a shared cluster for robotics research. This page contains notes on how to access the cluster.
To get onboarded, send us the public key that you want to use and, optionally, your preferred username.
After being onboarded, you should receive the following information:
* Your user ID (for this example, we'll use <code>stompy</code>)
* The jumphost ID (for this example, we'll use <code>127.0.0.1</code>)
* The cluster ID (for this example, we'll use <code>127.0.0.2</code>)
To connect, you should be able to use the following command:
<syntaxhighlight lang="bash">
ssh -o ProxyCommand="ssh -i ~/.ssh/id_rsa -W %h:%p stompy@127.0.0.1" stompy@127.0.0.2 -i ~/.ssh/id_rsa
</syntaxhighlight>
Note that <code>~/.ssh/id_rsa</code> should point to your private key file.
Alternatively, you can add the following to your SSH config file, which should allow you to connect directly:
<syntaxhighlight>
Host jumphost
User stompy
Hostname 127.0.0.1
IdentityFile ~/.ssh/id_rsa
Host cluster
User stompy
Hostname 127.0.0.2
ProxyJump jumphost
IdentityFile ~/.ssh/id_rsa
</syntaxhighlight>
945034cd4748e3fd2e078b3f1d6fdb50e24e6625
52
51
2024-04-24T02:33:08Z
Ben
2
wikitext
text/x-wiki
The K-Scale Labs cluster is a shared cluster for robotics research. This page contains notes on how to access the cluster.
=== Onboarding ===
To get onboarded, send us the public key that you want to use and, optionally, your preferred username.
After being onboarded, you should receive the following information:
* Your user ID (for this example, we'll use <code>stompy</code>)
* The jumphost ID (for this example, we'll use <code>127.0.0.1</code>)
* The cluster ID (for this example, we'll use <code>127.0.0.2</code>)
To connect, you should be able to use the following command:
<syntaxhighlight lang="bash">
ssh -o ProxyCommand="ssh -i ~/.ssh/id_rsa -W %h:%p stompy@127.0.0.1" stompy@127.0.0.2 -i ~/.ssh/id_rsa
</syntaxhighlight>
Note that <code>~/.ssh/id_rsa</code> should point to your private key file.
Alternatively, you can add the following to your SSH config file, which should allow you to connect directly:
<syntaxhighlight>
Host jumphost
User stompy
Hostname 127.0.0.1
IdentityFile ~/.ssh/id_rsa
Host cluster
User stompy
Hostname 127.0.0.2
ProxyJump jumphost
IdentityFile ~/.ssh/id_rsa
</syntaxhighlight>
Please inform us if you have any issues.
=== Notes ===
* You may be sharing your part of the cluster with other users. If so, it is a good idea to avoid using all the GPUs. If you're training models in PyTorch, you can restrict which GPUs are visible using the <code>CUDA_VISIBLE_DEVICES</code> environment variable.
* You should avoid storing data files and model checkpoints in your root directory. Instead, use the <code>/ephemeral</code> directory. Your home directory should come with a symlink to a subdirectory which you have write access to.
f7db11976c42bca986f14192ecbc4a01d37e11c5
53
52
2024-04-24T02:33:24Z
Ben
2
wikitext
text/x-wiki
The K-Scale Labs cluster is a shared cluster for robotics research. This page contains notes on how to access the cluster.
=== Onboarding ===
To get onboarded, send us the public key that you want to use and, optionally, your preferred username.
After being onboarded, you should receive the following information:
* Your user ID (for this example, we'll use <code>stompy</code>)
* The jumphost ID (for this example, we'll use <code>127.0.0.1</code>)
* The cluster ID (for this example, we'll use <code>127.0.0.2</code>)
To connect, you should be able to use the following command:
<syntaxhighlight>
ssh -o ProxyCommand="ssh -i ~/.ssh/id_rsa -W %h:%p stompy@127.0.0.1" stompy@127.0.0.2 -i ~/.ssh/id_rsa
</syntaxhighlight>
Note that <code>~/.ssh/id_rsa</code> should point to your private key file.
Alternatively, you can add the following to your SSH config file, which should allow you to connect directly:
<syntaxhighlight>
Host jumphost
User stompy
Hostname 127.0.0.1
IdentityFile ~/.ssh/id_rsa
Host cluster
User stompy
Hostname 127.0.0.2
ProxyJump jumphost
IdentityFile ~/.ssh/id_rsa
</syntaxhighlight>
Please inform us if you have any issues.
=== Notes ===
* You may be sharing your part of the cluster with other users. If so, it is a good idea to avoid using all the GPUs. If you're training models in PyTorch, you can restrict which GPUs are visible using the <code>CUDA_VISIBLE_DEVICES</code> environment variable.
* You should avoid storing data files and model checkpoints in your root directory. Instead, use the <code>/ephemeral</code> directory. Your home directory should come with a symlink to a subdirectory which you have write access to.
427efcb0ef3deccc8e29da0273398c324c2aa4d3
54
53
2024-04-24T02:33:53Z
Ben
2
wikitext
text/x-wiki
The K-Scale Labs cluster is a shared cluster for robotics research. This page contains notes on how to access the cluster.
=== Onboarding ===
To get onboarded, send us the public key that you want to use and, optionally, your preferred username.
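If you do not already have a keypair, you can generate one and send us the public half. This is a minimal sketch using a standard RSA keypair (matching the <code>~/.ssh/id_rsa</code> paths used below); any key type you prefer should also work:
<syntaxhighlight lang="bash">
# Generate a keypair, accepting the default path ~/.ssh/id_rsa.
ssh-keygen -t rsa -b 4096
# This is the public key to send to us; keep the private key (~/.ssh/id_rsa) to yourself.
cat ~/.ssh/id_rsa.pub
</syntaxhighlight>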
After being onboarded, you should receive the following information:
* Your user ID (for this example, we'll use <code>stompy</code>)
* The jumphost ID (for this example, we'll use <code>127.0.0.1</code>)
* The cluster ID (for this example, we'll use <code>127.0.0.2</code>)
To connect, you should be able to use the following command:
<syntaxhighlight lang="bash">
ssh -o ProxyCommand="ssh -i ~/.ssh/id_rsa -W %h:%p stompy@127.0.0.1" stompy@127.0.0.2 -i ~/.ssh/id_rsa
</syntaxhighlight>
Note that <code>~/.ssh/id_rsa</code> should point to your private key file.
Alternatively, you can add the following to your SSH config file, which should allow you to connect directly:
<syntaxhighlight lang="text">
Host jumphost
User stompy
Hostname 127.0.0.1
IdentityFile ~/.ssh/id_rsa
Host cluster
User stompy
Hostname 127.0.0.2
ProxyJump jumphost
IdentityFile ~/.ssh/id_rsa
</syntaxhighlight>
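With that config in place, connecting should be as simple as:
<syntaxhighlight lang="bash">
ssh cluster
</syntaxhighlight>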
Please inform us if you have any issues.
=== Notes ===
* You may be sharing your part of the cluster with other users. If so, it is a good idea to avoid using all the GPUs. If you're training models in PyTorch, you can restrict which GPUs are visible using the <code>CUDA_VISIBLE_DEVICES</code> environment variable (see the example below).
* You should avoid storing data files and model checkpoints in your root directory. Instead, use the <code>/ephemeral</code> directory. Your home directory should come with a symlink to a subdirectory which you have write access to.
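For example, to make only the first two GPUs visible to a training run (<code>train.py</code> here is just a placeholder for your own script):
<syntaxhighlight lang="bash">
# Restrict PyTorch (and anything else using CUDA) to GPUs 0 and 1.
CUDA_VISIBLE_DEVICES=0,1 python train.py
</syntaxhighlight>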
e4bd67c8320b0b135a64925a336efd704144f509
Humanoid Robots Wiki:About
4
17
55
2024-04-24T02:52:32Z
Ben
2
Created page with "The Humanoid Robots Wiki is a free public wiki containing general information about humanoid robots. It is maintained by [https://kscale.dev/ K-Scale Labs]."
wikitext
text/x-wiki
The Humanoid Robots Wiki is a free public wiki containing general information about humanoid robots. It is maintained by [https://kscale.dev/ K-Scale Labs].
ce6f28ed550a747d87c969bc3a10eddf85809b47
Isaac Sim
0
18
56
2024-04-24T02:58:52Z
Ben
2
Created page with "Isaac Sim is a simulator from Nvidia based on Omniverse. === Doing Simple Operations === '''Start Isaac Sim''' * Open Omniverse Launcher * Navigate to the Library * Under..."
wikitext
text/x-wiki
Isaac Sim is a simulator from Nvidia based on Omniverse.
=== Doing Simple Operations ===
'''Start Isaac Sim'''
* Open Omniverse Launcher
* Navigate to the Library
* Under “Apps” click “Isaac Sim”
* Click “Launch”
* There are multiple launch options. Choose the standard option to show the GUI, or a headless option if you are streaming.
* Choose <code>File > Open...</code> and select the <code>.usd</code> model corresponding to the robot you want to simulate.
'''Connecting streaming client'''
* Start Isaac Sim in Headless (Native) mode
* Open Omniverse Streaming Client
* Connect to the server
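If you prefer launching from a terminal rather than the Omniverse Launcher, the app can also be started from its install directory. This is a rough sketch; the install path and script names below are assumptions based on typical Omniverse Launcher installs and may differ for your version:
<syntaxhighlight lang="bash">
# Assumed default install location; adjust the version number to match yours.
cd ~/.local/share/ov/pkg/isaac_sim-2023.1.1
./isaac-sim.sh                    # normal launch with the GUI
./isaac-sim.headless.native.sh    # headless launch for the streaming client
</syntaxhighlight>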
[[Category: Simulators]]
551ba3322c3eabc314b546f01ea69381ffd7e22a
Category:Simulators
14
19
57
2024-04-24T02:59:12Z
Ben
2
Created page with "Category for various simulators that you can use"
wikitext
text/x-wiki
Category for various simulators that you can use
bd18a7545f0c9a81b86239006203a510d0ede019
CAN/IMU/Cameras with Jetson Orin
0
20
58
2024-04-24T03:07:54Z
Ben
2
Created page with "The Jetson Orin is a development board from Nvidia. === CAN Bus === See [https://docs.nvidia.com/jetson/archives/r34.1/DeveloperGuide/text/HR/ControllerAreaNetworkCan.html h..."
wikitext
text/x-wiki
The Jetson Orin is a development board from Nvidia.
=== CAN Bus ===
See [https://docs.nvidia.com/jetson/archives/r34.1/DeveloperGuide/text/HR/ControllerAreaNetworkCan.html here] for notes on configuring the CAN bus for the Jetson.
Install dependencies:
<syntaxhighlight lang="bash">
sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt update
sudo apt upgrade
sudo apt install g++ python3.11-dev
</syntaxhighlight>
Initialize the CAN bus on startup:
<syntaxhighlight lang="bash">
#!/bin/bash
# Set pinmux.
busybox devmem 0x0c303000 32 0x0000C400
busybox devmem 0x0c303008 32 0x0000C458
busybox devmem 0x0c303010 32 0x0000C400
busybox devmem 0x0c303018 32 0x0000C458
# Install modules.
modprobe can
modprobe can_raw
modprobe mttcan
# Turn off CAN.
ip link set down can0
ip link set down can1
# Set parameters.
ip link set can0 type can bitrate 1000000 dbitrate 1000000 berr-reporting on fd on loopback off
ip link set can1 type can bitrate 1000000 dbitrate 1000000 berr-reporting on fd on loopback off
# Turn on CAN.
ip link set up can0
ip link set up can1
</syntaxhighlight>
You can run this script automatically on startup by writing a service configuration to (for example) <code>/etc/systemd/system/can_setup.service</code>
<syntaxhighlight lang="text">
[Unit]
Description=Initialize CAN Interfaces
After=network.target
[Service]
Type=oneshot
ExecStart=/opt/kscale/enable_can.sh
RemainAfterExit=true
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
To enable this, run:
<syntaxhighlight lang="bash">
sudo systemctl enable can_setup
sudo systemctl start can_setup
</syntaxhighlight>
=== Cameras ===
==== Arducam IMX 219 ====
* [https://www.arducam.com/product/arducam-imx219-multi-camera-kit-for-the-nvidia-jetson-agx-orin/ Product Page]
* Shipping was pretty fast
* Order a couple of backup cameras, since a couple of the ones they shipped arrived broken
* [https://docs.arducam.com/Nvidia-Jetson-Camera/Nvidia-Jetson-Orin-Series/NVIDIA-Jetson-AGX-Orin/Quick-Start-Guide/ Quick start guide]
Run the installation script:
<syntaxhighlight lang="bash">
wget https://github.com/ArduCAM/MIPI_Camera/releases/download/v0.0.3/install_full.sh
chmod u+x install_full.sh
./install_full.sh -m imx219
</syntaxhighlight>
Supported kernel versions (see releases [https://github.com/ArduCAM/MIPI_Camera/releases here]):
* <code>5.10.104-tegra-35.3.1</code>
* <code>5.10.120-tegra-35.4.1</code>
Install an older kernel from [https://developer.nvidia.com/embedded/jetson-linux-archive here]. This required downgrading to Ubuntu 20.04 (only changing <code>/etc/os-version</code>).
Install dependencies:
<syntaxhighlight lang="bash">
sudo apt update
sudo apt install \
gstreamer1.0-tools \
gstreamer1.0-alsa \
gstreamer1.0-plugins-base \
gstreamer1.0-plugins-good \
gstreamer1.0-plugins-bad \
gstreamer1.0-plugins-ugly \
gstreamer1.0-libav
sudo apt install \
libgstreamer1.0-dev \
libgstreamer-plugins-base1.0-dev \
libgstreamer-plugins-good1.0-dev \
libgstreamer-plugins-bad1.0-dev
sudo apt install \
v4l-utils \
ffmpeg
</syntaxhighlight>
Make sure the camera shows up:
<syntaxhighlight lang="bash">
v4l2-ctl --list-formats-ext
</syntaxhighlight>
Capture a frame from the camera:
<syntaxhighlight lang="bash">
gst-launch-1.0 nvarguscamerasrc sensor-id=0 ! "video/x-raw(memory:NVMM), width=1280, height=720, framerate=60/1" ! nvvidconv ! jpegenc snapshot=TRUE ! filesink location=test.jpg
</syntaxhighlight>
Alternatively, use the following Python code:
<syntaxhighlight lang="python">
import cv2

# GStreamer pipeline: 1280x720 @ 60 fps from CSI camera 0, converted to BGR for OpenCV.
gst_str = (
    'nvarguscamerasrc sensor-id=0 ! '
    'video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)NV12, framerate=(fraction)60/1 ! '
    'nvvidconv flip-method=0 ! '
    'video/x-raw, width=(int)1280, height=(int)720, format=(string)BGRx ! '
    'videoconvert ! '
    'video/x-raw, format=(string)BGR ! '
    'appsink'
)

cap = cv2.VideoCapture(gst_str, cv2.CAP_GSTREAMER)
while True:
    ret, frame = cap.read()
    if ret:
        print(frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    else:
        break
cap.release()
cv2.destroyAllWindows()
</syntaxhighlight>
=== IMU ===
We're using the [https://ozzmaker.com/product/berryimu-accelerometer-gyroscope-magnetometer-barometricaltitude-sensor/ BerryIMU v3]. To use it, connect pin 3 on the Jetson to SDA and pin 5 to SCL for I2C bus 7. You can verify the connection is working by checking that the output of the following command matches:
<syntaxhighlight lang="bash">
$ sudo i2cdetect -y -r 7
0 1 2 3 4 5 6 7 8 9 a b c d e f
00: -- -- -- -- -- -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- 1c -- -- --
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
40: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
60: -- -- -- -- -- -- -- -- -- -- 6a -- -- -- -- --
70: -- -- -- -- -- -- -- 77
</syntaxhighlight>
The equivalent command on the Raspberry Pi should use bus 1:
<syntaxhighlight lang="bash">
sudo i2cdetect -y -r 1
</syntaxhighlight>
The default addresses are:
* <code>0x6A</code>: Gyroscope and accelerometer
* <code>0x1C</code>: Magnetometer
* <code>0x77</code>: Barometer
3c2dda225f11070061ff5eefce5fdbf5d68061f9
59
58
2024-04-24T03:10:01Z
Ben
2
/* Arducam IMX 219 */
wikitext
text/x-wiki
The Jetson Orin is a development board from Nvidia.
=== CAN Bus ===
See [https://docs.nvidia.com/jetson/archives/r34.1/DeveloperGuide/text/HR/ControllerAreaNetworkCan.html here] for notes on configuring the CAN bus for the Jetson.
Install dependencies:
<syntaxhighlight lang="bash">
sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt update
sudo apt upgrade
sudo apt install g++ python3.11-dev
</syntaxhighlight>
Initialize the CAN bus on startup:
<syntaxhighlight lang="bash">
#!/bin/bash
# Set pinmux.
busybox devmem 0x0c303000 32 0x0000C400
busybox devmem 0x0c303008 32 0x0000C458
busybox devmem 0x0c303010 32 0x0000C400
busybox devmem 0x0c303018 32 0x0000C458
# Install modules.
modprobe can
modprobe can_raw
modprobe mttcan
# Turn off CAN.
ip link set down can0
ip link set down can1
# Set parameters.
ip link set can0 type can bitrate 1000000 dbitrate 1000000 berr-reporting on fd on loopback off
ip link set can1 type can bitrate 1000000 dbitrate 1000000 berr-reporting on fd on loopback off
# Turn on CAN.
ip link set up can0
ip link set up can1
</syntaxhighlight>
You can run this script automatically on startup by writing a service configuration to (for example) <code>/etc/systemd/system/can_setup.service</code>
<syntaxhighlight lang="text">
[Unit]
Description=Initialize CAN Interfaces
After=network.target
[Service]
Type=oneshot
ExecStart=/opt/kscale/enable_can.sh
RemainAfterExit=true
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
To enable this, run:
<syntaxhighlight lang="bash">
sudo systemctl enable can_setup
sudo systemctl start can_setup
</syntaxhighlight>
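To confirm that the interfaces actually came up, you can inspect them with <code>iproute2</code>, and optionally dump traffic with <code>can-utils</code> (not installed above, so treat that part as an extra step):
<syntaxhighlight lang="bash">
# Show interface state, bitrate, and CAN-FD settings.
ip -details link show can0
# Optional: watch raw frames on the bus (requires can-utils).
sudo apt install can-utils
candump can0
</syntaxhighlight>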
=== Cameras ===
==== Arducam IMX 219 ====
* [https://www.arducam.com/product/arducam-imx219-multi-camera-kit-for-the-nvidia-jetson-agx-orin/ Product Page]
** Shipping was pretty fast
** Order a couple of backup cameras, since a couple of the ones they shipped arrived broken
* [https://docs.arducam.com/Nvidia-Jetson-Camera/Nvidia-Jetson-Orin-Series/NVIDIA-Jetson-AGX-Orin/Quick-Start-Guide/ Quick start guide]
Run the installation script:
<syntaxhighlight lang="bash">
wget https://github.com/ArduCAM/MIPI_Camera/releases/download/v0.0.3/install_full.sh
chmod u+x install_full.sh
./install_full.sh -m imx219
</syntaxhighlight>
Supported kernel versions (see releases [https://github.com/ArduCAM/MIPI_Camera/releases here]):
* <code>5.10.104-tegra-35.3.1</code>
* <code>5.10.120-tegra-35.4.1</code>
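A quick way to check compatibility is to compare the running kernel release against this list. The sketch below assumes the kernel reports a release of the form <code>5.10.x-tegra</code> (the trailing <code>35.x.y</code> in the names above is the L4T version):
<syntaxhighlight lang="python">
# Compare the running kernel against the Arducam-supported versions listed
# above. The kernel-release -> L4T mapping is taken from those release
# names; this is a convenience check, not an official one.
import platform

supported = {
    "5.10.104-tegra": "L4T 35.3.1",
    "5.10.120-tegra": "L4T 35.4.1",
}

release = platform.release()  # equivalent to `uname -r`
if release in supported:
    print(f"Kernel {release} ({supported[release]}) is on the supported list.")
else:
    print(f"Kernel {release} is not on the supported list: {sorted(supported)}")
</syntaxhighlight>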
If the running kernel is not on this list, install an older kernel from [https://developer.nvidia.com/embedded/jetson-linux-archive here]. In our case this required downgrading to Ubuntu 20.04 (only changing <code>/etc/os-version</code>).
Install dependencies:
<syntaxhighlight lang="bash">
sudo apt update
sudo apt install \
gstreamer1.0-tools \
gstreamer1.0-alsa \
gstreamer1.0-plugins-base \
gstreamer1.0-plugins-good \
gstreamer1.0-plugins-bad \
gstreamer1.0-plugins-ugly \
gstreamer1.0-libav
sudo apt install \
libgstreamer1.0-dev \
libgstreamer-plugins-base1.0-dev \
libgstreamer-plugins-good1.0-dev \
libgstreamer-plugins-bad1.0-dev
sudo apt install \
v4l-utils \
ffmpeg
</syntaxhighlight>
Make sure the camera shows up:
<syntaxhighlight lang="bash">
v4l2-ctl --list-formats-ext
</syntaxhighlight>
Capture a frame from the camera:
<syntaxhighlight lang="bash">
gst-launch-1.0 nvarguscamerasrc sensor-id=0 ! "video/x-raw(memory:NVMM), width=1280, height=720, framerate=60/1" ! nvvidconv ! jpegenc snapshot=TRUE ! filesink location=test.jpg
</syntaxhighlight>
Alternatively, use the following Python code:
<syntaxhighlight lang="python">
import cv2

gst_str = (
    'nvarguscamerasrc sensor-id=0 ! '
    'video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)NV12, framerate=(fraction)60/1 ! '
    'nvvidconv flip-method=0 ! '
    'video/x-raw, width=(int)1280, height=(int)720, format=(string)BGRx ! '
    'videoconvert ! '
    'video/x-raw, format=(string)BGR ! '
    'appsink'
)

cap = cv2.VideoCapture(gst_str, cv2.CAP_GSTREAMER)
while True:
    ret, frame = cap.read()
    if ret:
        print(frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    else:
        break
cap.release()
cv2.destroyAllWindows()
</syntaxhighlight>
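If the Jetson is running headless there is no window for <code>cv2.waitKey</code> to act on, so a simpler check is to grab a single frame and write it to disk. This is a sketch reusing the same pipeline (the output filename is our own choice):
<syntaxhighlight lang="python">
# Capture a single frame through the same GStreamer pipeline and save it,
# which is handy on a headless Jetson with no display attached.
import cv2

# Same pipeline as the example above, shown here in condensed form.
gst_str = (
    'nvarguscamerasrc sensor-id=0 ! '
    'video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)NV12, framerate=(fraction)60/1 ! '
    'nvvidconv flip-method=0 ! video/x-raw, format=(string)BGRx ! '
    'videoconvert ! video/x-raw, format=(string)BGR ! appsink'
)

cap = cv2.VideoCapture(gst_str, cv2.CAP_GSTREAMER)
ret, frame = cap.read()
cap.release()

if ret:
    cv2.imwrite('test_frame.jpg', frame)  # output filename is arbitrary
    print('Saved frame with shape', frame.shape)
else:
    print('Failed to read a frame from the camera')
</syntaxhighlight>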
=== IMU ===
We're using the [https://ozzmaker.com/product/berryimu-accelerometer-gyroscope-magnetometer-barometricaltitude-sensor/ BerryIMU v3]. To use it, connect pin 3 on the Jetson to SDA and pin 5 to SCL for I2C bus 7. You can verify that the connection is working by checking that the output of the following command matches:
<syntaxhighlight lang="bash">
$ sudo i2cdetect -y -r 7
0 1 2 3 4 5 6 7 8 9 a b c d e f
00: -- -- -- -- -- -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- 1c -- -- --
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
40: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
60: -- -- -- -- -- -- -- -- -- -- 6a -- -- -- -- --
70: -- -- -- -- -- -- -- 77
</syntaxhighlight>
The equivalent command on the Raspberry Pi should use bus 1:
<syntaxhighlight lang="bash">
sudo i2cdetect -y -r 1
</syntaxhighlight>
The default addresses are:
* <code>0x6A</code>: Gyroscope and accelerometer
* <code>0x1C</code>: Magnetometer
* <code>0x77</code>: Barometer
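As an additional software-side check, you can read the WHO_AM_I registers over I2C. The sketch below uses the <code>smbus2</code> package (an extra dependency); the register address and expected values are taken from the LSM6DSL and LIS3MDL datasheets, the parts used on the BerryIMU v3, and should be treated as assumptions rather than something verified here:
<syntaxhighlight lang="python">
# WHO_AM_I sanity check for the BerryIMU v3 on I2C bus 7 (use bus 1 on a
# Raspberry Pi). Assumes smbus2 is installed (`pip install smbus2`) and
# that the gyro/accel is an LSM6DSL at 0x6A and the magnetometer an
# LIS3MDL at 0x1C, both with WHO_AM_I at register 0x0F.
from smbus2 import SMBus

I2C_BUS = 7
WHO_AM_I_REG = 0x0F

checks = {
    "gyro/accel (LSM6DSL @ 0x6A)": (0x6A, 0x6A),    # expected WHO_AM_I value 0x6A
    "magnetometer (LIS3MDL @ 0x1C)": (0x1C, 0x3D),  # expected WHO_AM_I value 0x3D
}

with SMBus(I2C_BUS) as bus:
    for name, (addr, expected) in checks.items():
        value = bus.read_byte_data(addr, WHO_AM_I_REG)
        status = "OK" if value == expected else f"unexpected (wanted 0x{expected:02X})"
        print(f"{name}: WHO_AM_I = 0x{value:02X} -> {status}")
</syntaxhighlight>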
7383dadc24c397a0fc8b98ee3286da5213c2d35c
Main Page
0
1
60
50
2024-04-24T03:32:41Z
92.107.67.203
0
/* Robots */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots. As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Companies ===
==== Robots ====
* [[Optimus]]
* [[Agility]]
* [[Sanctuary]]
* [[1X]]
* [[Fourier Intelligence]]
* [[AGIBot]]
* [[Unitree]]
* [[UBTech]]
* [[Boston Dynamics]]
* [[Apptronik]]
* [[K-Scale Labs]]
==== Foundation Models ====
* [[Physical Intelligence]]
* [[Skild]]
=== Learning ===
* [[Underactuated Robotics]]
f70c52c93acd7cefbe73d74a535910ac83a03f48
61
60
2024-04-24T03:33:21Z
67.194.230.174
0
/* Learning */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots. As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Companies ===
==== Robots ====
* [[Optimus]]
* [[Agility]]
* [[Sanctuary]]
* [[1X]]
* [[Fourier Intelligence]]
* [[AGIBot]]
* [[Unitree]]
* [[UBTech]]
* [[Boston Dynamics]]
* [[Apptronik]]
* [[K-Scale Labs]]
==== Foundation Models ====
* [[Physical Intelligence]]
* [[Skild]]
=== Learning ===
* [[Underactuated Robotics]]
* [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
618df699325fccee847a02377d9d13f43c52a055
62
61
2024-04-24T03:33:23Z
92.107.67.203
0
/* Companies */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots. As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Companies ===
* [[Tesla]]
==== Robots ====
* [[Optimus]]
* [[Agility]]
* [[Sanctuary]]
* [[1X]]
* [[Fourier Intelligence]]
* [[AGIBot]]
* [[Unitree]]
* [[UBTech]]
* [[Boston Dynamics]]
* [[Apptronik]]
* [[K-Scale Labs]]
==== Foundation Models ====
* [[Physical Intelligence]]
* [[Skild]]
=== Learning ===
* [[Underactuated Robotics]]
* [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
28f71b8edd069d183f2c657f8502e6df350c3748
66
62
2024-04-24T03:56:08Z
Ben
2
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots. As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Companies ===
* [[Tesla]]
* [[K-Scale Labs]]
* [[Agility]]
* [[Sanctuary]]
* [[1X]]
* [[Fourier Intelligence]]
* [[AGIBot]]
* [[Unitree]]
* [[UBTech]]
* [[Boston Dynamics]]
* [[Apptronik]]
==== Robots ====
* [[Optimus]]
* [[Unitree H1]]
* [[Neo]]
* [[Eve]]
* [[Stompy]]
==== Foundation Models ====
* [[Physical Intelligence]]
* [[Skild]]
=== Learning ===
* [[Underactuated Robotics]]
* [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
08aa494327cb0a5746ce6aee42ea18954f7d75e6
67
66
2024-04-24T03:56:15Z
Ben
2
/* Robots */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots. As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Companies ===
* [[Tesla]]
* [[K-Scale Labs]]
* [[Agility]]
* [[Sanctuary]]
* [[1X]]
* [[Fourier Intelligence]]
* [[AGIBot]]
* [[Unitree]]
* [[UBTech]]
* [[Boston Dynamics]]
* [[Apptronik]]
==== Robots ====
* [[Optimus]]
* [[H1]]
* [[Neo]]
* [[Eve]]
* [[Stompy]]
==== Foundation Models ====
* [[Physical Intelligence]]
* [[Skild]]
=== Learning ===
* [[Underactuated Robotics]]
* [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
6f2242e4a681e01cfe54cc392fa93356938ecc00
68
67
2024-04-24T04:00:16Z
Ben
2
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots. As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Tesla]]
| [[Optimus]]
|-
|-
| [[Agility]]
|
|-
| [[Sanctuary]]
|
|-
| [[1X]]
| [[EVE]], [[NEO]]
|-
| [[Fourier Intelligence]]
|
|-
| [[Unitree]]
| [[Unitree H1]]
|-
| [[UBTech]]
|
|-
| [[Boston Dynamics]]
| [[Atlas]]
|-
| [[Apptronik]]
|
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|}
=== Learning ===
* [[Underactuated Robotics]]
* [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
92bbc2f383bb9bc022f6e5b9d976c97e5d80fe8a
69
68
2024-04-24T04:02:35Z
Ben
2
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots. As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
Here are some resources to get started learning about humanoid robots.
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
|
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Tesla]]
| [[Optimus]]
|-
|-
| [[Agility]]
|
|-
| [[Sanctuary]]
|
|-
| [[1X]]
| [[EVE]], [[NEO]]
|-
| [[Fourier Intelligence]]
|
|-
| [[Unitree]]
| [[Unitree H1]]
|-
| [[UBTech]]
|
|-
| [[Boston Dynamics]]
| [[Atlas]]
|-
| [[Apptronik]]
|
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|}
59f6537df964aca7ba0aa870aaca342885d15abe
70
69
2024-04-24T04:07:22Z
76.144.71.131
0
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots. As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
Here are some resources to get started learning about humanoid robots.
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
|
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Tesla]]
| [[Optimus]]
|-
|-
| [[Agility]]
| [[Cassie]], [[DigitV3], [Digit]]
|
|-
| [[Sanctuary]]
|
|-
| [[1X]]
| [[EVE]], [[NEO]]
|-
| [[Fourier Intelligence]]
|
|-
| [[Unitree]]
| [[Unitree H1]]
|-
| [[UBTech]]
|
|-
| [[Boston Dynamics]]
| [[Atlas]]
|-
| [[Apptronik]]
|
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|}
29aabc00229d86fc9577af5b99de2951c5d6f5a7
71
70
2024-04-24T04:07:39Z
76.144.71.131
0
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots. As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
Here are some resources to get started learning about humanoid robots.
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
|
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Tesla]]
| [[Optimus]]
|-
|-
| [[Agility]]
| [[Cassie]], [[DigitV3]], [[Digit]]
|
|-
| [[Sanctuary]]
|
|-
| [[1X]]
| [[EVE]], [[NEO]]
|-
| [[Fourier Intelligence]]
|
|-
| [[Unitree]]
| [[Unitree H1]]
|-
| [[UBTech]]
|
|-
| [[Boston Dynamics]]
| [[Atlas]]
|-
| [[Apptronik]]
|
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|}
5e544a353e56719d0258e928406ef477dc66998d
72
71
2024-04-24T04:08:03Z
76.144.71.131
0
/* List of Humanoid Robots */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots. As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
Here are some resources to get started learning about humanoid robots.
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
|
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Tesla]]
| [[Optimus]]
|-
|-
| [[Agility]]
| [[Cassie]], [[DigitV3]], [[Digit]]
|-
| [[Sanctuary]]
|
|-
| [[1X]]
| [[EVE]], [[NEO]]
|-
| [[Fourier Intelligence]]
|
|-
| [[Unitree]]
| [[Unitree H1]]
|-
| [[UBTech]]
|
|-
| [[Boston Dynamics]]
| [[Atlas]]
|-
| [[Apptronik]]
|
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|}
406e775c8d26bee782751732b54e55b70e8264fa
73
72
2024-04-24T04:11:14Z
76.144.71.131
0
/* List of Humanoid Robots */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots. As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
Here are some resources to get started learning about humanoid robots.
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
|
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Tesla]]
| [[Optimus]]
|-
|-
| [[Agility]]
| [[Cassie]], [[DigitV3]], [[Digit]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[1X]]
| [[EVE]], [[NEO]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[Unitree H1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|}
418c6beda54489deea97092a203f01e389757d4f
77
73
2024-04-24T04:16:04Z
Ben
2
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots. As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
Here are some resources to get started learning about humanoid robots.
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
|
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Tesla]]
| [[Optimus]]
|-
|-
| [[Agility]]
| [[Cassie]], [[DigitV3]], [[Digit]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[1X]]
| [[EVE]], [[NEO]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|}
bdffb97a65fde89194f2771b67861bb26b8662aa
78
77
2024-04-24T04:19:44Z
Ben
2
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots. As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
Here are some resources to get started learning about humanoid robots.
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
|
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Tesla]]
| [[Optimus]]
|-
|-
| [[Agility]]
| [[Cassie]], [[DigitV3]], [[Digit]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[1X]]
| [[EVE]], [[NEO]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[XPENG]]
| [[PX5]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|}
72be79b91b2579f2c44587261a43d18b5f534486
80
78
2024-04-24T04:21:51Z
Ben
2
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots. As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
Here are some resources to get started learning about humanoid robots.
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
|
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Tesla]]
| [[Optimus]]
|-
|-
| [[Agility]]
| [[Cassie]], [[DigitV3]], [[Digit]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[1X]]
| [[EVE]], [[NEO]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Era Robotics]]
| [[XBot-L]], [[XBot-S]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[XPENG]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|}
5e0a77ade68bcef5f2b385c697cb306ef699adf2
81
80
2024-04-24T04:22:02Z
Ben
2
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots. As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
Here are some resources to get started learning about humanoid robots.
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
|
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Tesla]]
| [[Optimus]]
|-
|-
| [[Agility]]
| [[Cassie]], [[DigitV3]], [[Digit]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[1X]]
| [[EVE]], [[NEO]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Era Robotics]]
| [[XBot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[XPENG]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|}
6fc7f12cf7bf6efed3741753a06f36fbf48e5270
82
81
2024-04-24T04:26:19Z
Ben
2
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots. As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
Here are some resources to get started learning about humanoid robots.
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
|
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Tesla]]
| [[Optimus]]
|-
|-
| [[Agility]]
| [[Cassie]], [[DigitV3]], [[Digit]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[1X]]
| [[EVE]], [[NEO]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Era Robotics]]
| [[XBot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[XPENG]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|}
01fd23f8b4684298e0e6679be5792cf48137ce0e
83
82
2024-04-24T04:30:38Z
172.56.152.64
0
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots. As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
Here are some resources to get started learning about humanoid robots.
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
|
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Tesla]]
| [[Optimus]]
|-
|-
| [[Agility]]
| [[Cassie]], [[DigitV3]], [[Digit]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[EVE]], [[NEO]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Era Robotics]]
| [[XBot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[XPENG]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|}
d25f9d513123395775976810013e0e7865801797
96
83
2024-04-24T04:50:24Z
76.144.71.131
0
/* List of Humanoid Robots */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots. As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
Here are some resources to get started learning about humanoid robots.
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
|
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Tesla]]
| [[Optimus]]
|-
|-
| [[Agility]]
| [[Cassie]], [[DigitV3]], [[Digit]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[EVE]], [[NEO]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Era Robotics]]
| [[XBot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]],[[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[XPENG]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|}
5c2d700f58628e341ca655b59a338a50973c9d76
97
96
2024-04-24T04:50:43Z
76.144.71.131
0
/* List of Humanoid Robots */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots. As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
Here are some resources to get started learning about humanoid robots.
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
|
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Agility]]
| [[Cassie]], [[DigitV3]], [[Digit]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[EVE]], [[NEO]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Era Robotics]]
| [[XBot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[XPENG]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|}
2008cf132cfd5d97e9ba68f8fe699cd5efd44d35
Underactuated Robotics
0
13
63
34
2024-04-24T03:36:12Z
Sagar
4
Link to the course website.
wikitext
text/x-wiki
This is a course taught by Russ Tedrake at MIT.
https://underactuated.csail.mit.edu/
a5f5b589eedfafe7c56b8fcb50c08bcb9710bfcd
Project Aria
0
21
64
2024-04-24T03:54:46Z
Ben
2
Created page with "AR glasses for data capture from Meta. === References === ==== Links ==== * [https://www.projectaria.com/ Website] * [https://docs.ego-exo4d-data.org/ Ego4D Documentation]..."
wikitext
text/x-wiki
Project Aria is a pair of AR glasses from Meta used for data capture.
=== References ===
==== Links ====
* [https://www.projectaria.com/ Website]
* [https://docs.ego-exo4d-data.org/ Ego4D Documentation]
* [https://facebookresearch.github.io/projectaria_tools/docs/intro Project Aria Documentation]
** [https://facebookresearch.github.io/projectaria_tools/docs/data_formats/mps/mps_trajectory Specific page regarding trajectories]
==== Datasets ====
* [https://www.projectaria.com/datasets/apd/ APD Dataset]
811d22fc32ee1c2e3a74958f9b447a7b83c40ef8
Optimus
0
22
65
2024-04-24T03:55:18Z
Ben
2
Created page with "The humanoid robot from Tesla. [[Category:Robots]]"
wikitext
text/x-wiki
The humanoid robot from Tesla.
[[Category:Robots]]
0887b16263546ec0d33e5f7497481233a726ddc1
1X
0
10
74
38
2024-04-24T04:15:29Z
76.144.71.131
0
wikitext
text/x-wiki
[https://www.1x.tech/ 1X] (formerly known as Halodi Robotics) is a humanoid robotics company based in Moss, Norway. They have two robots: EVE and NEO. EVE is a wheeled robot, while NEO has legs. The company is known for its high-torque BLDC motors, which it developed in-house and pairs with low-gear-ratio cable drives. EVE and NEO are designed for safe human interaction by reducing actuator inertia.
[[Category:Companies]]
6bd3830bc47b96572ede840ac193cc7a832c6611
H1
0
3
75
15
2024-04-24T04:15:52Z
Ben
2
Ben moved page [[Unitree H1]] to [[H1]]
wikitext
text/x-wiki
The H1 is a humanoid robot from [[Unitree]]. It is available for purchase [https://shop.unitree.com/products/unitree-h1 here].
900095efde90381d578981a4184f54e668da07cf
Unitree H1
0
23
76
2024-04-24T04:15:52Z
Ben
2
Ben moved page [[Unitree H1]] to [[H1]]
wikitext
text/x-wiki
#REDIRECT [[H1]]
1abd28e18815dc95839514830b8b3c4cc11ceef6
Cassie
0
24
79
2024-04-24T04:21:43Z
76.144.71.131
0
Created page with " Cassie is a bipedal robot designed by Oregon State University and licensed and built by Agility Robotics. ==Development=="
wikitext
text/x-wiki
Cassie is a bipedal robot designed by Oregon State University and licensed and built by Agility Robotics.
==Development==
95fef9ac2606c58a82094bf7a30d09311af26bdd
84
79
2024-04-24T04:31:28Z
76.144.71.131
0
wikitext
text/x-wiki
Cassie is a bipedal robot designed by Oregon State University and licensed and built by Agility Robotics.
==Specifications==
{| class="wikitable"
|-
| Height || 115 cm
|-
| Weight|| 31 kg
|-
| Speed || >4 m/s (8.95 mph)
|-
| Payload || Example
|-
| Battery Life || ~5 hours
|-
| Battery Capacity || 1 kWh
|-
| DoF || 10 (5 per leg)
|-
| Cost || $250,000
|-
| Number Made || ~12
|-
| Status || Retired
|}
==Development==
==World Record==
41764868683af6d4cc1bc0dbf92702c13203a12d
89
84
2024-04-24T04:44:38Z
76.144.71.131
0
wikitext
text/x-wiki
Cassie is a bipedal robot designed by Oregon State University and licensed and built by Agility Robotics.
==Specifications==
{| class="wikitable"
|-
| Height || 115 cm
|-
| Weight|| 31 kg
|-
| Speed || >4 m/s (8.95 mph)
|-
| Payload ||
|-
| Battery Life || ~5 hours
|-
| Battery Capacity || 1 kWh
|-
| DoF || 10 (5 per leg)
|-
| Cost || $250,000
|-
| Number Made || ~12
|-
| Status || Retired
|}
==Development==
On February 9, 2017, Cassie was unveiled at an event at Oregon State University featuring a live demo. A video of the unveiling was also posted to both the OSU and Agility Robotics YouTube channels.
On September 5, 2017, the University of Michigan received the first Cassie, which they named "Cassie Blue." They would later receive "Cassie Yellow."
==Firsts and World Record==
Oregon State University's DRAIL lab used a reinforcement-learned model to have Cassie run a 100 m dash in 24.73 seconds. The Guinness World Record required a standing start and finish, so Cassie averaged a speed of over 4 m/s. For comparison, at the time the Guinness World Record for fastest-running humanoid was held by ASIMO at 2.5 m/s, a record set indoors on a specially leveled floor. Cassie's run was done outside at the Whyte Track and Field Center on Oregon State's campus. The record was announced in September 2022, but the actual run took place about six months earlier. Cassie and the OSU DRAIL lab have an entire page dedicated to them in the 2024 Guinness World Records book. Footage of the run can be seen [https://www.youtube.com/watch?v=DdojWYOK0Nc here].
77c6d661dbaff15a99c517f5d8b5cc61cc742dd5
Template:Infobox company
10
25
85
2024-04-24T04:36:50Z
Ben
2
Created page with "<includeonly>{{Infobox | titlestyle = padding-bottom:0.25em | title = {{If empty |{{{name|}}} |{{{shorttitle|}}} |{{PAGENAMEBASE}} }} | headerstyle = background:#bbddff | la..."
wikitext
text/x-wiki
<includeonly>{{Infobox
| titlestyle = padding-bottom:0.25em
| title = {{If empty |{{{name|}}} |{{{shorttitle|}}} |{{PAGENAMEBASE}} }}
| headerstyle = background:#bbddff
| labelstyle = padding-top:0.245em;line-height:1.15em;padding-right:0.65em
| datastyle = text-align:left;line-height:1.3em
| autoheaders = y
| label1 = Legal Name
| data1 = {{{legal_name|}}}
| label2 = Country
| data2 = {{{country|}}}
}}</includeonly>
20553278eaca4b3ec4b13f68444fc675357d21f7
91
85
2024-04-24T04:46:56Z
Admin
1
wikitext
text/x-wiki
<includeonly>
<table class="infobox" style="width: 300px; font-size: 90%;">
<tr><th colspan="2" style="text-align: center; background-color: #cccccc;">{{{name|Infobox}}}</th></tr>
<tr>{{{content}}}</tr>
</table>
</includeonly>
<noinclude>
This is the template for a basic infobox. [[Category:Templates]]
</noinclude>
b3f9201ace7e3b9f58308ad3ad7153272d574e0c
92
91
2024-04-24T04:48:13Z
Admin
1
wikitext
text/x-wiki
<includeonly>
<table class="infobox" style="width: 300px; font-size: 90%;">
<tr><th colspan="2" style="text-align: center; background-color: #cccccc;">{{{name|Infobox}}}</th></tr>
<tr><td>{{{legal_name}}}</td></tr>
<tr><td>{{{robots}}}</td></tr>
</table>
</includeonly>
<noinclude>
This is the template for a basic infobox. [[Category:Templates]]
</noinclude>
3cc7fdfc0c7d3cfe77f1e80b76152d4f9b5edf87
94
92
2024-04-24T04:49:14Z
Admin
1
wikitext
text/x-wiki
<includeonly>
<table class="infobox" style="width: 300px; font-size: 90%;">
<tr><th colspan="2" style="text-align: center; background-color: #cccccc;">{{{name|Infobox}}}</th></tr>
<tr><td>{{{legal_name}}}</td></tr>
<tr><td>{{{country}}}</td></tr>
<tr><td>{{{robots}}}</td></tr>
</table>
</includeonly>
<noinclude>
This is the template for a basic infobox. [[Category:Templates]]
</noinclude>
0f52a825209243299bf1c124c714997a35041f9e
98
94
2024-04-24T04:51:28Z
Admin
1
wikitext
text/x-wiki
<includeonly><table class="infobox" style="width: 300px; font-size: 90%;">
<tr><th colspan="2" style="text-align: center; background-color: #cccccc;">{{{name|Infobox}}}</th></tr>
<tr><td>{{{legal_name}}}</td></tr>
<tr><td>{{{country}}}</td></tr>
<tr><td>{{{robots}}}</td></tr>
</table></includeonly>
<noinclude>
This is the template for a basic infobox. [[Category:Templates]]
</noinclude>
7417fe1483e2208ad946d92b963d263a2f2b6148
Tesla
0
7
86
41
2024-04-24T04:37:02Z
Ben
2
wikitext
text/x-wiki
{{infobox company
| legal_name = Tesla, Inc.
| country = United States
}}
Tesla is building a humanoid robot called Optimus.
[[Category:Companies]]
cdd4bd231071a526b5864e5095c2955b83dc917a
90
86
2024-04-24T04:45:57Z
Admin
1
wikitext
text/x-wiki
{{infobox
| name = Tesla, Inc.
| content = United States
}}
Tesla is building a humanoid robot called Optimus.
[[Category:Companies]]
378f21a0d043b7d9bc2f2548e3ed395a91799e4f
93
90
2024-04-24T04:48:57Z
Admin
1
wikitext
text/x-wiki
{{infobox
| name = Tesla
| legal_name = Tesla, Inc.
| country = United States
}}
Tesla is building a humanoid robot called Optimus.
[[Category:Companies]]
568833acd44f8fdbf14c9cfe6d375db41881e0c4
95
93
2024-04-24T04:49:27Z
Admin
1
wikitext
text/x-wiki
{{infobox company
| name = Tesla
| legal_name = Tesla, Inc.
| country = United States
| robots = [[Optimus]]
}}
Tesla is building a humanoid robot called Optimus.
[[Category:Companies]]
4000c6e462c12c25bed24bd2773822570704c918
99
95
2024-04-24T04:51:39Z
Admin
1
wikitext
text/x-wiki
{{infobox company
| name = Tesla
| legal_name = Tesla, Inc.
| country = United States
| robots = [[Optimus]]
}}
Tesla is building a humanoid robot [[Optimus]].
[[Category:Companies]]
d7a1a1c9a0cb01ab77d5a3c95645b62a49c9afc2
MediaWiki:Common.css
8
27
88
2024-04-24T04:43:29Z
Admin
1
Created page with "/* CSS placed here will be applied to all skins */ .infobox { border: 1px solid #aaaaaa; background-color: #f9f9f9; padding: 5px; width: 300px; /* or any othe..."
css
text/css
/* CSS placed here will be applied to all skins */
.infobox {
border: 1px solid #aaaaaa;
background-color: #f9f9f9;
padding: 5px;
width: 300px; /* or any other width */
}
.infobox th {
background-color: #e0e0e0;
text-align: center;
}
.infobox td {
padding: 2px 5px;
}
caa9d3efdd5b6e6240c159cca1491176282aa468
Template:Infobox robot
10
28
100
2024-04-24T04:54:35Z
Admin
1
Created page with "<includeonly><table class="infobox" style="width: 300px; font-size: 90%;"> <tr><th colspan="2" style="text-align: center; background-color: #cccccc;">{{{name}}}</th></tr> <tr>..."
wikitext
text/x-wiki
<includeonly><table class="infobox" style="width: 300px; font-size: 90%;">
<tr><th colspan="2" style="text-align: center; background-color: #cccccc;">{{{name}}}</th></tr>
<tr><td>{{{company}}}</td></tr>
<tr><td>{{{height}}}</td></tr>
<tr><td>{{{weight}}}</td></tr>
<tr><td>{{{single_hand_payload}}}</td></tr>
<tr><td>{{{two_hand_payload}}}</td></tr>
<tr><td>{{{cost}}}</td></tr>
</table></includeonly>
<noinclude>
This is the template for a basic infobox. [[Category:Templates]]
</noinclude>
0bcfbc3b04b0614bbf224da73c057cab969de81f
101
100
2024-04-24T04:55:10Z
Admin
1
wikitext
text/x-wiki
<includeonly><table class="infobox" style="width: 300px; font-size: 90%;">
<tr><th colspan="2" style="text-align: center; background-color: #cccccc;">{{{name}}}</th></tr>
<tr><td>{{{organization}}}</td></tr>
<tr><td>{{{height}}}</td></tr>
<tr><td>{{{weight}}}</td></tr>
<tr><td>{{{single_hand_payload}}}</td></tr>
<tr><td>{{{two_hand_payload}}}</td></tr>
<tr><td>{{{cost}}}</td></tr>
</table></includeonly>
<noinclude>
This is the template for a basic infobox. [[Category:Templates]]
</noinclude>
e13df0ffdf3cc2f2ca183847408c06d3ce3ff6e5
Optimus
0
22
102
65
2024-04-24T04:55:27Z
Admin
1
wikitext
text/x-wiki
{{infobox robot
| name = Optimus
| organization = [[Tesla]]
}}
The humanoid robot from Tesla.
[[Category:Robots]]
07c310e4aa39cda84ad4ce3cc810fc0c24a7cd63
103
102
2024-04-24T04:58:51Z
Admin
1
wikitext
text/x-wiki
{{infobox robot
| name = Optimus
| organization = [[Tesla]]
}}
The humanoid robot from [[Tesla]].
[[Category:Robots]]
49bda9acc845a832bea425b1381eb3eeb3cc279b
108
103
2024-04-24T05:14:00Z
Admin
1
wikitext
text/x-wiki
{{infobox robot
| name = Optimus
| organization = [[Tesla]]
| video = https://www.youtube.com/watch?v=cpraXaw7dyc
}}
The humanoid robot from [[Tesla]].
[[Category:Robots]]
aee31315ff860dfa597ff905681816578e199bb5
110
108
2024-04-24T05:16:54Z
Admin
1
wikitext
text/x-wiki
{{infobox robot
| name = Optimus
| organization = [[Tesla]]
| video = https://www.youtube.com/watch?v=cpraXaw7dyc
}}
Optimus is a humanoid robot from [[Tesla]].
[[Category:Robots]]
653e328581e8a58e1692ee34f1bb0bd3fc120038
111
110
2024-04-24T05:17:43Z
Admin
1
wikitext
text/x-wiki
Optimus is a humanoid robot from [[Tesla]].
{{infobox robot
| name = Optimus
| organization = [[Tesla]]
| video = https://www.youtube.com/watch?v=cpraXaw7dyc
}}
[[Category:Robots]]
18d8db29eee0ad16cd78cc79ae7e2ddbd64224a4
113
111
2024-04-24T05:18:49Z
Admin
1
wikitext
text/x-wiki
{{infobox robot
| name = Optimus
| organization = [[Tesla]]
| video = https://www.youtube.com/watch?v=cpraXaw7dyc
}}
Optimus is a humanoid robot from [[Tesla]].
[[Category:Robots]]
653e328581e8a58e1692ee34f1bb0bd3fc120038
119
113
2024-04-24T05:23:46Z
Admin
1
wikitext
text/x-wiki
Optimus is a humanoid robot from [[Tesla]].
{{infobox robot
| name = Optimus
| organization = [[Tesla]]
| video = https://www.youtube.com/watch?v=cpraXaw7dyc
}}
[[Category:Robots]]
18d8db29eee0ad16cd78cc79ae7e2ddbd64224a4
125
119
2024-04-24T06:32:59Z
Mrroboto
5
wikitext
text/x-wiki
Optimus is a humanoid robot from [[Tesla]].
{{infobox robot
| name = Optimus
| organization = [[Tesla]]
| video = https://www.youtube.com/watch?v=cpraXaw7dyc
}}
[[File:Optimus Tesla.jpg|thumb|The Optimus robot from Tesla]]
[[Category:Robots]]
ef4acfeea1a71412d92323c84f3db70ca706f2c2
126
125
2024-04-24T06:34:44Z
Mrroboto
5
wikitext
text/x-wiki
Optimus is a humanoid robot from [[Tesla]].
{{infobox robot
| name = Optimus
| organization = [[Tesla]]
| video = https://www.youtube.com/watch?v=cpraXaw7dyc
}}
[[Category:Robots]]
18d8db29eee0ad16cd78cc79ae7e2ddbd64224a4
129
126
2024-04-24T06:37:23Z
Mrroboto
5
wikitext
text/x-wiki
Optimus is a humanoid robot from [[Tesla]].
{{infobox robot
| name = Optimus
| organization = [[Tesla]]
| video = https://www.youtube.com/watch?v=cpraXaw7dyc
}}
[[File:Optimus Tesla (1).jpg|thumb]]
[[Category:Robots]]
08153e0cb7951922620fdef5b32073bfd293dc31
133
129
2024-04-24T06:55:29Z
Mrroboto
5
wikitext
text/x-wiki
Optimus is a humanoid robot from [[Tesla]].
{{infobox robot
| name = Optimus
| organization = [[Tesla]]
| video = https://www.youtube.com/watch?v=cpraXaw7dyc
}}
[[File:Optimus Tesla (1).jpg|none|300px|The Tesla Optimus on display|thumb]]
[[Category:Robots]]
01c380222326d68e1ceb000aab6e6484211ed5b1
135
133
2024-04-24T07:02:22Z
Mrroboto
5
wikitext
text/x-wiki
Optimus is a humanoid robot from [[Tesla]].
{{infobox robot
| name = Optimus
| organization = [[Tesla]]
| video = https://www.youtube.com/watch?v=cpraXaw7dyc
| purchase_link = Rumored
}}
[[File:Optimus Tesla (1).jpg|none|300px|The Tesla Optimus on display|thumb]]
[[Category:Robots]]
a66c1514cc28f472cbaf2b1cf58ac4348b9ed72c
137
135
2024-04-24T07:03:36Z
Mrroboto
5
wikitext
text/x-wiki
Optimus is a humanoid robot from [[Tesla]].
{{infobox robot
| name = Optimus
| organization = [[Tesla]]
| video = https://www.youtube.com/watch?v=cpraXaw7dyc
| purchase_link = Rumored 2025
}}
[[File:Optimus Tesla (1).jpg|none|300px|The Tesla Optimus on display|thumb]]
[[Category:Robots]]
29a8b1ffb43136e8ce13e9e6bb533b551e2146a3
140
137
2024-04-24T07:04:18Z
Mrroboto
5
wikitext
text/x-wiki
Optimus is a humanoid robot from [[Tesla]].
{{infobox robot
| name = Optimus
| organization = [[Tesla]]
| video = https://www.youtube.com/watch?v=cpraXaw7dyc
| purchase_link = Rumored, 2025
}}
[[File:Optimus Tesla (1).jpg|none|300px|The Tesla Optimus on display|thumb]]
[[Category:Robots]]
2728ad91f8489a5010db108d676fa2eeb0301548
144
140
2024-04-24T07:06:20Z
Mrroboto
5
wikitext
text/x-wiki
Optimus is a humanoid robot from [[Tesla]].
{{infobox robot
| name = Optimus
| organization = [[Tesla]]
| video = https://www.youtube.com/watch?v=cpraXaw7dyc
| purchase_link = Rumored 2025
}}
[[File:Optimus Tesla (1).jpg|none|300px|The Tesla Optimus on display|thumb]]
[[Category:Robots]]
29a8b1ffb43136e8ce13e9e6bb533b551e2146a3
150
144
2024-04-24T07:14:37Z
Mrroboto
5
wikitext
text/x-wiki
Optimus is a humanoid robot from [[Tesla]].
{{infobox robot
| name = Optimus
| organization = [[Tesla]]
| video = https://www.youtube.com/watch?v=cpraXaw7dyc
}}
[[File:Optimus Tesla (1).jpg|none|300px|The Tesla Optimus on display|thumb]]
[[Category:Robots]]
01c380222326d68e1ceb000aab6e6484211ed5b1
Category:Robots
14
29
104
2024-04-24T04:59:26Z
Admin
1
Created page with "Category for specific humanoid robot implementations."
wikitext
text/x-wiki
Category for specific humanoid robot implementations.
f2dd3ff9b9d3216304e77dd99677a66b91b3a8bc
Category:Templates
14
30
105
2024-04-24T04:59:53Z
Admin
1
Created page with "Category for templates that are used in various places."
wikitext
text/x-wiki
Category for templates that are used in various places.
47f74cd9e844eb53ef5683662fd622a553b56ba6
Template:Infobox robot
10
28
106
101
2024-04-24T05:12:02Z
Admin
1
wikitext
text/x-wiki
<includeonly><table class="infobox" style="width: 300px; font-size: 90%;">
<tr><th colspan="2" style="text-align: center; background-color: #cccccc;">{{{name}}}</th></tr>
{{#if: {{{organization|}}} | <tr><th>Organization</th><td>{{{organization}}}</td></tr> }}
{{#if: {{{height|}}} | <tr><th>Height</th><td>{{{height}}}</td></tr> }}
{{#if: {{{weight|}}} | <tr><th>Weight</th><td>{{{weight}}}</td></tr> }}
{{#if: {{{single_hand_payload|}}} | <tr><th>Single Hand Payload</th><td>{{{single_hand_payload}}}</td></tr> }}
{{#if: {{{two_hand_payload|}}} | <tr><th>Two Hand Payload</th><td>{{{two_hand_payload}}}</td></tr> }}
{{#if: {{{cost|}}} | <tr><th>Cost</th><td>{{{cost}}}</td></tr> }}
</table></includeonly>
<noinclude>
This is the template for a basic infobox. [[Category:Templates]]
</noinclude>
cb414c4239fecb4c16e44d0d08b7c0c4c11c6a45
107
106
2024-04-24T05:13:54Z
Admin
1
wikitext
text/x-wiki
<includeonly><table class="infobox" style="width: 300px; font-size: 90%;">
<tr><th colspan="2" style="text-align: center; background-color: #cccccc;">{{{name}}}</th></tr>
{{#if: {{{organization|}}} | <tr><th>Organization</th><td>{{{organization}}}</td></tr> }}
{{#if: {{{height|}}} | <tr><th>Height</th><td>{{{height}}}</td></tr> }}
{{#if: {{{weight|}}} | <tr><th>Weight</th><td>{{{weight}}}</td></tr> }}
{{#if: {{{single_hand_payload|}}} | <tr><th>Single Hand Payload</th><td>{{{single_hand_payload}}}</td></tr> }}
{{#if: {{{two_hand_payload|}}} | <tr><th>Two Hand Payload</th><td>{{{two_hand_payload}}}</td></tr> }}
{{#if: {{{cost|}}} | <tr><th>Cost</th><td>{{{cost}}}</td></tr> }}
{{#if: {{{video|}}} | <tr><th>Video</th><td>{{{video}}}</td></tr> }}
</table></includeonly>
<noinclude>
This is the template for a basic infobox. [[Category:Templates]]
</noinclude>
1a9eda5c3d90bf0bbab5563696c5f6de4cd93875
112
107
2024-04-24T05:18:34Z
Admin
1
wikitext
text/x-wiki
<includeonly><table class="infobox" style="width: 300px; font-size: 90%;"><tr><th colspan="2" style="text-align: center; background-color: #cccccc;">{{{name}}}</th></tr>{{#if: {{{organization|}}} | <tr><th>Organization</th><td>{{{organization}}}</td></tr> }}{{#if: {{{height|}}} | <tr><th>Height</th><td>{{{height}}}</td></tr> }}{{#if: {{{weight|}}} | <tr><th>Weight</th><td>{{{weight}}}</td></tr> }}{{#if: {{{single_hand_payload|}}} | <tr><th>Single Hand Payload</th><td>{{{single_hand_payload}}}</td></tr> }}{{#if: {{{two_hand_payload|}}} | <tr><th>Two Hand Payload</th><td>{{{two_hand_payload}}}</td></tr> }}{{#if: {{{cost|}}} | <tr><th>Cost</th><td>{{{cost}}}</td></tr> }}{{#if: {{{video|}}} | <tr><th>Video</th><td>{{{video}}}</td></tr> }}</table></includeonly><noinclude>This is the template for a basic infobox. [[Category:Templates]]</noinclude>
856023026e283d367074329b9a713883adc140b3
114
112
2024-04-24T05:20:20Z
Admin
1
wikitext
text/x-wiki
<includeonly><table class="infobox" style="width: 300px; font-size: 90%;"><tr><th colspan="2" style="text-align: center; background-color: #cccccc;">{{{name}}}</th></tr>{{#if: {{{organization|}}} | <tr><th>Organization</th><td>{{{organization}}}</td></tr> }}{{#if: {{{height|}}} | <tr><th>Height</th><td>{{{height}}}</td></tr> }}{{#if: {{{weight|}}} | <tr><th>Weight</th><td>{{{weight}}}</td></tr> }}{{#if: {{{single_hand_payload|}}} | <tr><th>Single Hand Payload</th><td>{{{single_hand_payload}}}</td></tr> }}{{#if: {{{two_hand_payload|}}} | <tr><th>Two Hand Payload</th><td>{{{two_hand_payload}}}</td></tr> }}{{#if: {{{cost|}}} | <tr><th>Cost</th><td>{{{cost}}}</td></tr> }}{{#if: {{{video|}}} | <tr><th>Video</th><td>{{{video}}}</td></tr> }}</table></includeonly><noinclude>This is the template for a basic infobox.
Fields are:
* <code>name</code>
* <code>organization</code>
* <code>height</code>
* <code>weight</code>
* <code>single_hand_payload</code>
* <code>two_hand_payload</code>
* <code>cost</code>
* <code>video</code>
[[Category:Templates]]</noinclude>
10bb3d1356be6b0fec5bff320448ec3d877b9268
116
114
2024-04-24T05:22:34Z
Admin
1
wikitext
text/x-wiki
<includeonly><table class="infobox" style="width: 300px; font-size: 90%;"><tr><th colspan="2" style="text-align: center; background-color: #cccccc;">{{{name}}}</th></tr>{{#if: {{{organization|}}} | <tr><th>Organization</th><td>{{{organization}}}</td></tr> }}{{#if: {{{height|}}} | <tr><th>Height</th><td>{{{height}}}</td></tr> }}{{#if: {{{weight|}}} | <tr><th>Weight</th><td>{{{weight}}}</td></tr> }}{{#if: {{{single_hand_payload|}}} | <tr><th>Single Hand Payload</th><td>{{{single_hand_payload}}}</td></tr> }}{{#if: {{{two_hand_payload|}}} | <tr><th>Two Hand Payload</th><td>{{{two_hand_payload}}}</td></tr> }}{{#if: {{{cost|}}} | <tr><th>Cost</th><td>{{{cost}}}</td></tr> }}{{#if: {{{video|}}} | <tr><th>Video</th><td>{{{video}}}</td></tr> }}</table></includeonly><noinclude>This is the template for the robot infobox.
Fields are:
* <code>name</code>
* <code>organization</code>
* <code>height</code>
* <code>weight</code>
* <code>single_hand_payload</code>
* <code>two_hand_payload</code>
* <code>cost</code>
* <code>video</code>
[[Category:Templates]]</noinclude>
66c5e2ab61fc431d8afb87d57343038f813ca717
134
116
2024-04-24T07:01:52Z
Mrroboto
5
wikitext
text/x-wiki
<includeonly><table class="infobox" style="width: 300px; font-size: 90%;"><tr><th colspan="2" style="text-align: center; background-color: #cccccc;">{{{name}}}</th></tr>{{#if: {{{organization|}}} | <tr><th>Organization</th><td>{{{organization}}}</td></tr> }}{{#if: {{{height|}}} | <tr><th>Height</th><td>{{{height}}}</td></tr> }}{{#if: {{{weight|}}} | <tr><th>Weight</th><td>{{{weight}}}</td></tr> }}{{#if: {{{single_hand_payload|}}} | <tr><th>Single Hand Payload</th><td>{{{single_hand_payload}}}</td></tr> }}{{#if: {{{two_hand_payload|}}} | <tr><th>Two Hand Payload</th><td>{{{two_hand_payload}}}</td></tr> }}{{#if: {{{cost|}}} | <tr><th>Cost</th><td>{{{cost}}}</td></tr> }}{{#if: {{{video|}}} | <tr><th>Video</th><td>{{{video}}}</td></tr> }}{{#if: {{{purchase_link|}}} | <tr><th>Purchase Link</th><td>{{{purchase_link}}}</td></tr> }}</table></includeonly><noinclude>This is the template for the robot infobox.
Fields are:
* <code>name</code>
* <code>organization</code>
* <code>height</code>
* <code>weight</code>
* <code>single_hand_payload</code>
* <code>two_hand_payload</code>
* <code>cost</code>
* <code>video</code>
* <code>purchase_link</code>
[[Category:Templates]]</noinclude>
b5a0532191b1ca99b8c06d91e9745d183048c1fd
136
134
2024-04-24T07:03:25Z
Mrroboto
5
wikitext
text/x-wiki
<includeonly>
<table class="infobox" style="width: 300px; font-size: 90%;">
<tr><th colspan="2" style="text-align: center; background-color: #cccccc;">{{{name}}}</th></tr
{{#if: {{{organization|}}} | <tr><th>Organization</th><td>{{{organization}}}</td></tr> }}
{{#if: {{{height|}}} | <tr><th>Height</th><td>{{{height}}}</td></tr> }}
{{#if: {{{weight|}}} | <tr><th>Weight</th><td>{{{weight}}}</td></tr> }}
{{#if: {{{single_hand_payload|}}} | <tr><th>Single Hand Payload</th><td>{{{single_hand_payload}}}</td></tr> }}
{{#if: {{{two_hand_payload|}}} | <tr><th>Two Hand Payload</th><td>{{{two_hand_payload}}}</td></tr> }}
{{#if: {{{cost|}}} | <tr><th>Cost</th><td>{{{cost}}}</td></tr> }}
{{#if: {{{video|}}} | <tr><th>Video</th><td>{{{video}}}</td></tr> }}
{{#if: {{{purchase_link|}}} | <tr><th>Purchase Link</th><td>{{{purchase_link}}}</td></tr> }}
</table>
</includeonly><noinclude>This is the template for the robot infobox.
Fields are:
* <code>name</code>
* <code>organization</code>
* <code>height</code>
* <code>weight</code>
* <code>single_hand_payload</code>
* <code>two_hand_payload</code>
* <code>cost</code>
* <code>video</code>
* <code>purchase_link</code>
[[Category:Templates]]</noinclude>
2714fd9559872125052b747da6323e188dc464b6
138
136
2024-04-24T07:03:49Z
Mrroboto
5
wikitext
text/x-wiki
<includeonly>
<table class="infobox" style="width: 300px; font-size: 90%;">
<tr><th colspan="2" style="text-align: center; background-color: #cccccc;">{{{name}}}</th></tr>
{{#if: {{{organization|}}} | <tr><th>Organization</th><td>{{{organization}}}</td></tr> }}
{{#if: {{{height|}}} | <tr><th>Height</th><td>{{{height}}}</td></tr> }}
{{#if: {{{weight|}}} | <tr><th>Weight</th><td>{{{weight}}}</td></tr> }}
{{#if: {{{single_hand_payload|}}} | <tr><th>Single Hand Payload</th><td>{{{single_hand_payload}}}</td></tr> }}
{{#if: {{{two_hand_payload|}}} | <tr><th>Two Hand Payload</th><td>{{{two_hand_payload}}}</td></tr> }}
{{#if: {{{cost|}}} | <tr><th>Cost</th><td>{{{cost}}}</td></tr> }}
{{#if: {{{video|}}} | <tr><th>Video</th><td>{{{video}}}</td></tr> }}
{{#if: {{{purchase_link|}}} | <tr><th>Purchase Link</th><td>{{{purchase_link}}}</td></tr> }}
</table>
</includeonly><noinclude>This is the template for the robot infobox.
Fields are:
* <code>name</code>
* <code>organization</code>
* <code>height</code>
* <code>weight</code>
* <code>single_hand_payload</code>
* <code>two_hand_payload</code>
* <code>cost</code>
* <code>video</code>
* <code>purchase_link</code>
[[Category:Templates]]</noinclude>
1118d0ed157ba26842b4942040cfdb1d9ef5ae8e
139
138
2024-04-24T07:04:10Z
Mrroboto
5
wikitext
text/x-wiki
<includeonly><table class="infobox" style="width: 300px; font-size: 90%;">
<tr><th colspan="2" style="text-align: center; background-color: #cccccc;">{{{name}}}</th></tr>
{{#if: {{{organization|}}} | <tr><th>Organization</th><td>{{{organization}}}</td></tr> }}
{{#if: {{{height|}}} | <tr><th>Height</th><td>{{{height}}}</td></tr> }}
{{#if: {{{weight|}}} | <tr><th>Weight</th><td>{{{weight}}}</td></tr> }}
{{#if: {{{single_hand_payload|}}} | <tr><th>Single Hand Payload</th><td>{{{single_hand_payload}}}</td></tr> }}
{{#if: {{{two_hand_payload|}}} | <tr><th>Two Hand Payload</th><td>{{{two_hand_payload}}}</td></tr> }}
{{#if: {{{cost|}}} | <tr><th>Cost</th><td>{{{cost}}}</td></tr> }}
{{#if: {{{video|}}} | <tr><th>Video</th><td>{{{video}}}</td></tr> }}
{{#if: {{{purchase_link|}}} | <tr><th>Purchase Link</th><td>{{{purchase_link}}}</td></tr> }}
</table></includeonly><noinclude>This is the template for the robot infobox.
Fields are:
* <code>name</code>
* <code>organization</code>
* <code>height</code>
* <code>weight</code>
* <code>single_hand_payload</code>
* <code>two_hand_payload</code>
* <code>cost</code>
* <code>video</code>
* <code>purchase_link</code>
[[Category:Templates]]</noinclude>
0e3d0588acc6bd9392db7d8bd73df0ad8843549f
143
139
2024-04-24T07:06:14Z
Mrroboto
5
wikitext
text/x-wiki
<includeonly><table class="infobox">
<tr><th colspan="2">{{{name}}}</th></tr>
{{#if: {{{organization|}}} | <tr><th>Organization</th><td>{{{organization}}}</td></tr> }}
{{#if: {{{height|}}} | <tr><th>Height</th><td>{{{height}}}</td></tr> }}
{{#if: {{{weight|}}} | <tr><th>Weight</th><td>{{{weight}}}</td></tr> }}
{{#if: {{{single_hand_payload|}}} | <tr><th>Single Hand Payload</th><td>{{{single_hand_payload}}}</td></tr> }}
{{#if: {{{two_hand_payload|}}} | <tr><th>Two Hand Payload</th><td>{{{two_hand_payload}}}</td></tr> }}
{{#if: {{{cost|}}} | <tr><th>Cost</th><td>{{{cost}}}</td></tr> }}
{{#if: {{{video|}}} | <tr><th>Video</th><td>{{{video}}}</td></tr> }}
{{#if: {{{purchase_link|}}} | <tr><th>Purchase Link</th><td>{{{purchase_link}}}</td></tr> }}
</table></includeonly><noinclude>This is the template for the robot infobox.
Fields are:
* <code>name</code>
* <code>organization</code>
* <code>height</code>
* <code>weight</code>
* <code>single_hand_payload</code>
* <code>two_hand_payload</code>
* <code>cost</code>
* <code>video</code>
* <code>purchase_link</code>
[[Category:Templates]]</noinclude>
f9e98c710373655935f9b24db331d5cb0839e633
147
143
2024-04-24T07:12:16Z
Mrroboto
5
wikitext
text/x-wiki
{{infobox
| key1 = Name
| value1 = {{{name}}}
| key2 = Organization
| value2 = {{{organization}}}
| key3 = Video
| value3 = {{{video}}}
| key4 = Purchase Link
| value4 = {{{purchase_link}}}
}}
ba2714a86aa37393cc9b0346c27b6f22eda993e8
148
147
2024-04-24T07:13:06Z
Mrroboto
5
wikitext
text/x-wiki
{{infobox
| name = {{{name}}}
| key1 = Name
| value1 = {{{name}}}
| key2 = Organization
| value2 = {{{organization}}}
| key3 = Video
| value3 = {{{video}}}
| key4 = Purchase Link
| value4 = {{{purchase_link}}}
}}
49435000619f6d642d099c22d85c698d8724f775
149
148
2024-04-24T07:14:18Z
Mrroboto
5
wikitext
text/x-wiki
{{infobox
| name = {{{name}}}
| key1 = Name
| value1 = {{{name}}}
| key2 = Organization
| value2 = {{{organization}}}
| key3 = Video
| value3 = {{#if: {{{video|}}} | [{{{video}}} Video] }}
| key4 = Purchase
| value4 = {{#if: {{{purchase_link|}}} | [{{{purchase_link}}} Link] }}
}}
6673131dca65c16a87cc3f403e2602c87195dd77
151
149
2024-04-24T07:15:40Z
Mrroboto
5
wikitext
text/x-wiki
{{infobox
| name = {{{name}}}
| key1 = Name
| value1 = {{{name}}}
| key2 = Organization
| value2 = {{{organization}}}
| key3 = {{#if: {{{video|}}} | Video }}
| value3 = [{{{video}}} Video]
| key4 = {{#if: {{{purchase_link|}}} | Purchase }}
| value4 = [{{{purchase_link}}} Link]
}}
c5fec06d3992574d88eeb0ed047aa5e87d1ac3f4
MediaWiki:Common.css
8
27
109
88
2024-04-24T05:16:21Z
Admin
1
css
text/css
/* CSS placed here will be applied to all skins */
.infobox {
border: 1px solid #aaaaaa;
background-color: #f9f9f9;
padding: 5px;
width: 300px;
float: right;
margin: 10px 0 10px 10px;
}
.infobox th {
background-color: #e0e0e0;
text-align: center;
padding: 2px 5px;
}
.infobox td {
padding: 2px 5px;
}
8fcebcc4a2b165c433ff7bdc6ecf620967085ded
118
109
2024-04-24T05:23:32Z
Admin
1
css
text/css
/* CSS placed here will be applied to all skins */
.infobox {
border: 1px solid #aaaaaa;
background-color: #f9f9f9;
padding: 5px;
width: 300px;
}
.infobox th {
background-color: #e0e0e0;
text-align: center;
padding: 2px 5px;
}
.infobox td {
padding: 2px 5px;
}
f49b4037e90fb4b453cf022dfb913fe3f36f34be
120
118
2024-04-24T05:24:05Z
Admin
1
css
text/css
/* CSS placed here will be applied to all skins */
.infobox {
border: 1px solid #aaaaaa;
background-color: #f9f9f9;
padding: 5px;
width: 300px;
margin-top: 20px;
margin-bottom: 20px;
}
.infobox th {
background-color: #e0e0e0;
text-align: center;
padding: 2px 5px;
}
.infobox td {
padding: 2px 5px;
}
a904b3fdeefdb2ef478c3ef8b6e850dd88683927
141
120
2024-04-24T07:05:48Z
Admin
1
css
text/css
/* CSS placed here will be applied to all skins */
.infobox {
border: 1px solid #aaaaaa;
background-color: #f9f9f9;
padding: 5px;
width: 300px;
font-size: 90%;
margin-top: 20px;
margin-bottom: 20px;
}
.infobox th {
background-color: #e0e0e0;
text-align: center;
padding: 2px 5px;
}
.infobox td {
padding: 2px 5px;
}
c965a4d4f15316e124ff6af9fae0e1b7471fe083
142
141
2024-04-24T07:06:08Z
Admin
1
css
text/css
/* CSS placed here will be applied to all skins */
.infobox {
border: 1px solid #aaaaaa;
background-color: #f9f9f9;
padding: 5px;
width: 300px;
font-size: 90%;
margin-top: 20px;
margin-bottom: 20px;
}
.infobox th {
background-color: #e0e0e0;
text-align: center;
padding: 2px 5px;
text-align: center;
background-color: #cccccc;
}
.infobox td {
padding: 2px 5px;
}
546e7039b9abd8427f3adf807b3081cf65c596f4
Template:Infobox company
10
25
115
98
2024-04-24T05:22:25Z
Admin
1
wikitext
text/x-wiki
<includeonly><table class="infobox" style="width: 300px; font-size: 90%;"><tr><th colspan="2" style="text-align: center; background-color: #cccccc;">{{{name}}}</th></tr>{{#if: {{{country|}}} | <tr><th>Country</th><td>{{{country}}}</td></tr> }}{{#if: {{{website|}}} | <tr><th>Website</th><td>{{{website}}}</td></tr> }}{{#if: {{{robots|}}} | <tr><th>Robots</th><td>{{{robots}}}</td></tr> }}</table></includeonly><noinclude>This is the template for the company infobox.
Fields are:
* <code>name</code>
* <code>country</code>
* <code>website</code>
* <code>robots</code>
[[Category:Templates]]</noinclude>
8dcbacb12f7bf1c36d7f01226a71fac02ce0d1a7
Tesla
0
7
117
99
2024-04-24T05:23:00Z
Admin
1
wikitext
text/x-wiki
{{infobox company
| name = Tesla
| country = United States
| website = https://www.tesla.com/
| robots = [[Optimus]]
}}
Tesla is building a humanoid robot [[Optimus]].
[[Category:Companies]]
c50774886424a0db025582065610a1dcc1b1587d
121
117
2024-04-24T05:24:17Z
Admin
1
wikitext
text/x-wiki
Tesla is building a humanoid robot [[Optimus]].
{{infobox company
| name = Tesla
| country = United States
| website = https://www.tesla.com/
| robots = [[Optimus]]
}}
[[Category:Companies]]
c6fa48baea02ce0575a318a3485e2db5e3a159ad
122
121
2024-04-24T05:24:27Z
Admin
1
wikitext
text/x-wiki
Tesla is building a humanoid robot called [[Optimus]].
{{infobox company
| name = Tesla
| country = United States
| website = https://www.tesla.com/
| robots = [[Optimus]]
}}
[[Category:Companies]]
9379163b9886449543635e50f77ca3d807f73dc5
Main Page
0
1
123
97
2024-04-24T06:26:40Z
69.181.66.238
0
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots. As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
Here are some resources to get started learning about humanoid robots.
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
|
|-
| [[Learning algorithms]]
| Resources related with training humanoid models in simulation and real environments
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Agility]]
| [[Cassie]], [[DigitV3]], [[Digit]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[EVE]], [[NEO]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Era Robotics]]
| [[XBot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[XPENG]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|}
c7fc6a3b6e403e5c88a273cecdedff5ae1366758
Learning algorithms
0
32
127
2024-04-24T06:36:45Z
69.181.66.238
0
Created page with "= Learning algorithms = Learning algorithms allow to train humanoids to perform different skills such as manipulation or locomotion. Below is an overview of general approaches..."
wikitext
text/x-wiki
= Learning algorithms =
Learning algorithms make it possible to train humanoids to perform skills such as manipulation and locomotion. Below is an overview of general approaches to training machine learning models for humanoid robots.
== Physics engines ==
Physics engines are software libraries designed to simulate physical systems in a virtual environment. They are crucial in a variety of fields such as video games, animation, robotics, and engineering simulations. These engines handle the mathematics involved in simulating physical processes like motion, collisions, and fluid dynamics.
===PhysX===
PhysX is a physics engine by NVIDIA used primarily for video games and real-time simulations. It supports rigid body dynamics, cloth simulation, and particle effects, enhancing realism and interactivity in 3D environments.
===Mujoco===
MuJoCo (Multi-Joint dynamics with Contact) is a physics engine designed for research in robotics and biomechanics. It's known for its speed, accuracy, and ease of use, making it popular for simulating complex robotic systems and articulated structures.
== Training frameworks ==
Popular training frameworks include the following.
===Isaac Gym===
Isaac Gym is NVIDIA's robotics simulation tool, part of the Isaac SDK. It leverages GPU acceleration to enable the simulation of thousands of robot bodies simultaneously, making it highly efficient for training machine learning models in robotics. It's designed to streamline robotics applications, focusing on reinforcement learning in a virtual environment.
===Gymnasium===
Gymnasium is an open-source toolkit for developing and comparing reinforcement learning algorithms. Originally developed by OpenAI as "Gym," it provides a standardized set of environments (like Atari games, robotic simulations, etc.) to test and benchmark AI algorithms. It's widely used in the AI research community to foster innovation and replication in RL studies.
== Training algorithms ==
===Imitation learning===
Imitation Learning is a technique where models learn to perform tasks by mimicking expert behaviors. This approach is often used when defining explicit reward functions is challenging. It accelerates learning by using pre-collected datasets of expert demonstrations, reducing the need for trial-and-error in initial learning phases.
===Reinforcement Learning===
Reinforcement Learning involves agents learning to make decisions by interacting with an environment to maximize cumulative rewards. It's foundational in fields where sequential decision-making is crucial, like gaming, autonomous vehicles, and robotics. RL uses methods like Q-learning and policy gradient to iteratively improve agent performance based on feedback from the environment.
b78ad4ec7eb2be38016e656a01cf1c1a172e25e6
130
127
2024-04-24T06:41:45Z
69.181.66.238
0
/* Reinforcement Learning */
wikitext
text/x-wiki
= Learning algorithms =
Learning algorithms make it possible to train humanoids to perform skills such as manipulation and locomotion. Below is an overview of general approaches to training machine learning models for humanoid robots.
== Physics engines ==
Physics engines are software libraries designed to simulate physical systems in a virtual environment. They are crucial in a variety of fields such as video games, animation, robotics, and engineering simulations. These engines handle the mathematics involved in simulating physical processes like motion, collisions, and fluid dynamics.
===PhysX===
PhysX is a physics engine by NVIDIA used primarily for video games and real-time simulations. It supports rigid body dynamics, cloth simulation, and particle effects, enhancing realism and interactivity in 3D environments.
===Mujoco===
MuJoCo (Multi-Joint dynamics with Contact) is a physics engine designed for research in robotics and biomechanics. It's known for its speed, accuracy, and ease of use, making it popular for simulating complex robotic systems and articulated structures.
== Training frameworks ==
Popular training frameworks include the following.
===Isaac Gym===
Isaac Gym is NVIDIA's robotics simulation tool, part of the Isaac SDK. It leverages GPU acceleration to enable the simulation of thousands of robot bodies simultaneously, making it highly efficient for training machine learning models in robotics. It's designed to streamline robotics applications, focusing on reinforcement learning in a virtual environment.
===Gymnasium===
Gymnasium is an open-source toolkit for developing and comparing reinforcement learning algorithms. Originally developed by OpenAI as "Gym," it provides a standardized set of environments (like Atari games, robotic simulations, etc.) to test and benchmark AI algorithms. It's widely used in the AI research community to foster innovation and replication in RL studies.
== Training algorithms ==
===Imitation learning===
Imitation Learning is a technique where models learn to perform tasks by mimicking expert behaviors. This approach is often used when defining explicit reward functions is challenging. It accelerates learning by using pre-collected datasets of expert demonstrations, reducing the need for trial-and-error in initial learning phases.
===[[Reinforcement Learning]]===
Reinforcement Learning involves agents learning to make decisions by interacting with an environment to maximize cumulative rewards. It's foundational in fields where sequential decision-making is crucial, like gaming, autonomous vehicles, and robotics. RL uses methods like Q-learning and policy gradient to iteratively improve agent performance based on feedback from the environment.
59b94a0456b67cf00600b3b2af314df3fb5e7183
132
130
2024-04-24T06:44:15Z
69.181.66.238
0
wikitext
text/x-wiki
= Learning algorithms =
Learning algorithms make it possible to train humanoids to perform skills such as manipulation and locomotion. Below is an overview of general approaches to training machine learning models for humanoid robots.
== Physics engines ==
Physics engines are software libraries designed to simulate physical systems in a virtual environment. They are crucial in a variety of fields such as video games, animation, robotics, and engineering simulations. These engines handle the mathematics involved in simulating physical processes like motion, collisions, and fluid dynamics.
===PhysX===
PhysX is a physics engine by NVIDIA used primarily for video games and real-time simulations. It supports rigid body dynamics, cloth simulation, and particle effects, enhancing realism and interactivity in 3D environments.
===Mujoco===
MuJoCo (Multi-Joint dynamics with Contact) is a physics engine designed for research in robotics and biomechanics. It's known for its speed, accuracy, and ease of use, making it popular for simulating complex robotic systems and articulated structures.
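For illustration, a minimal simulation loop with the official <code>mujoco</code> Python bindings might look like the following sketch. The XML model here is a hypothetical single falling box, not any particular robot.
<syntaxhighlight lang=python>
import mujoco

# A hypothetical minimal model: one free-floating box under gravity.
BOX_XML = """
<mujoco>
  <worldbody>
    <body name="box" pos="0 0 1">
      <joint type="free"/>
      <geom type="box" size="0.1 0.1 0.1"/>
    </body>
  </worldbody>
</mujoco>
"""

model = mujoco.MjModel.from_xml_string(BOX_XML)  # compile the model
data = mujoco.MjData(model)                      # allocate simulation state

for _ in range(500):
    mujoco.mj_step(model, data)                  # advance the physics by one timestep

# qpos of a free joint is (x, y, z, qw, qx, qy, qz); index 2 is the height.
print("box height after 500 steps:", data.qpos[2])
</syntaxhighlight>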
==Simulators==
===[[Isaac Sim]]===
===[[VSim]]===
==Training frameworks==
Popular training frameworks include the following.
===Isaac Gym===
Isaac Gym is NVIDIA's robotics simulation tool, part of the Isaac SDK. It leverages GPU acceleration to enable the simulation of thousands of robot bodies simultaneously, making it highly efficient for training machine learning models in robotics. It's designed to streamline robotics applications, focusing on reinforcement learning in a virtual environment.
===Gymnasium===
Gymnasium is an open-source toolkit for developing and comparing reinforcement learning algorithms. Originally developed by OpenAI as "Gym," it provides a standardized set of environments (like Atari games, robotic simulations, etc.) to test and benchmark AI algorithms. It's widely used in the AI research community to foster innovation and replication in RL studies.
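A minimal interaction loop with Gymnasium is sketched below. <code>Pendulum-v1</code> is just one of the built-in environments and stands in for any registered environment, and the random policy is a placeholder for a learned one.
<syntaxhighlight lang=python>
import gymnasium as gym

env = gym.make("Pendulum-v1")              # any registered environment id works here
obs, info = env.reset(seed=0)

total_reward = 0.0
for _ in range(200):
    action = env.action_space.sample()     # random placeholder policy
    obs, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    if terminated or truncated:
        obs, info = env.reset()

env.close()
print("return of the random policy:", total_reward)
</syntaxhighlight>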
== Training algorithms ==
===[[Imitation learning]]===
Imitation Learning is a technique where models learn to perform tasks by mimicking expert behaviors. This approach is often used when defining explicit reward functions is challenging. It accelerates learning by using pre-collected datasets of expert demonstrations, reducing the need for trial-and-error in initial learning phases.
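As a rough sketch of the idea, behavioral cloning (the simplest form of imitation learning) fits a policy to expert observation/action pairs. The demonstrations below are hypothetical stand-ins and the policy is a plain least-squares fit; real systems use richer policies and datasets.
<syntaxhighlight lang=python>
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical expert demonstrations: observations and the actions the expert took.
expert_obs = rng.normal(size=(1000, 4))
true_weights = np.array([[1.0], [-0.5], [0.2], [0.0]])
expert_act = expert_obs @ true_weights

# Behavioral cloning with a linear policy: minimize ||obs @ W - act||^2 over the demos.
W, _, _, _ = np.linalg.lstsq(expert_obs, expert_act, rcond=None)

# The cloned policy imitates the expert on a new observation.
new_obs = rng.normal(size=(1, 4))
print("imitated action:", (new_obs @ W).item())
</syntaxhighlight>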
===[[Reinforcement Learning]]===
Reinforcement Learning involves agents learning to make decisions by interacting with an environment to maximize cumulative rewards. It's foundational in fields where sequential decision-making is crucial, like gaming, autonomous vehicles, and robotics. RL uses methods like Q-learning and policy gradient to iteratively improve agent performance based on feedback from the environment.
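As a concrete example of the Q-learning update mentioned above, the sketch below learns a tabular policy on Gymnasium's small <code>FrozenLake-v1</code> environment. Humanoid control involves far larger state spaces and function approximation, but the update rule is the same idea.
<syntaxhighlight lang=python>
import numpy as np
import gymnasium as gym

env = gym.make("FrozenLake-v1", is_slippery=False)
q = np.zeros((env.observation_space.n, env.action_space.n))
alpha, gamma, epsilon = 0.1, 0.99, 0.1         # learning rate, discount, exploration rate

for episode in range(2000):
    state, _ = env.reset()
    done = False
    while not done:
        # Epsilon-greedy action selection.
        if np.random.rand() < epsilon:
            action = env.action_space.sample()
        else:
            action = int(np.argmax(q[state]))
        next_state, reward, terminated, truncated, _ = env.step(action)
        done = terminated or truncated
        # Q-learning update: move Q(s, a) toward r + gamma * max_a' Q(s', a').
        q[state, action] += alpha * (reward + gamma * np.max(q[next_state]) - q[state, action])
        state = next_state

print("greedy action per state:", np.argmax(q, axis=1))
</syntaxhighlight>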
22e708af83e215987cfcd7921afea72992718b3b
File:Optimus Tesla (1).jpg
6
33
128
2024-04-24T06:37:17Z
Mrroboto
5
wikitext
text/x-wiki
The Optimus robot from Tesla
9baab017b6f3bd9f54f2b4d7fe33d8f428602773
Reinforcement Learning
0
34
131
2024-04-24T06:42:26Z
69.181.66.238
0
Created page with " ==Training algorithms== ===A2C=== ===PPO==="
wikitext
text/x-wiki
==Training algorithms==
===A2C===
===PPO===
95d6e5ffc9d381d85d89b934739af7387fec932f
Template:Infobox
10
35
145
2024-04-24T07:09:11Z
Mrroboto
5
Created page with "<includeonly><table class="infobox"><tr><th colspan="2">{{{name}}}</th></tr>{{#if: {{{key1|}}} | <tr><th>{{{key1}}}</th><td>{{{value1}}}</td></tr> }}{{#if: {{{key2|}}} | <tr><..."
wikitext
text/x-wiki
<includeonly><table class="infobox"><tr><th colspan="2">{{{name}}}</th></tr>{{#if: {{{key1|}}} | <tr><th>{{{key1}}}</th><td>{{{value1}}}</td></tr> }}{{#if: {{{key2|}}} | <tr><th>{{{key2}}}</th><td>{{{value2}}}</td></tr> }}{{#if: {{{key3|}}} | <tr><th>{{{key3}}}</th><td>{{{value3}}}</td></tr> }}{{#if: {{{key4|}}} | <tr><th>{{{key4}}}</th><td>{{{value4}}}</td></tr> }}{{#if: {{{key5|}}} | <tr><th>{{{key5}}}</th><td>{{{value5}}}</td></tr> }}{{#if: {{{key6|}}} | <tr><th>{{{key6}}}</th><td>{{{value6}}}</td></tr> }}{{#if: {{{key7|}}} | <tr><th>{{{key7}}}</th><td>{{{value7}}}</td></tr> }}{{#if: {{{key8|}}} | <tr><th>{{{key8}}}</th><td>{{{value8}}}</td></tr> }}{{#if: {{{key9|}}} | <tr><th>{{{key9}}}</th><td>{{{value9}}}</td></tr> }}{{#if: {{{key10|}}} | <tr><th>{{{key10}}}</th><td>{{{value10}}}</td></tr> }}{{#if: {{{key11|}}} | <tr><th>{{{key11}}}</th><td>{{{value11}}}</td></tr> }}{{#if: {{{key12|}}} | <tr><th>{{{key12}}}</th><td>{{{value12}}}</td></tr> }}{{#if: {{{key13|}}} | <tr><th>{{{key13}}}</th><td>{{{value13}}}</td></tr> }}{{#if: {{{key14|}}} | <tr><th>{{{key14}}}</th><td>{{{value14}}}</td></tr> }}{{#if: {{{key15|}}} | <tr><th>{{{key15}}}</th><td>{{{value15}}}</td></tr> }}{{#if: {{{key16|}}} | <tr><th>{{{key16}}}</th><td>{{{value16}}}</td></tr> }}{{#if: {{{key17|}}} | <tr><th>{{{key17}}}</th><td>{{{value17}}}</td></tr> }}{{#if: {{{key18|}}} | <tr><th>{{{key18}}}</th><td>{{{value18}}}</td></tr> }}{{#if: {{{key19|}}} | <tr><th>{{{key19}}}</th><td>{{{value19}}}</td></tr> }}{{#if: {{{key20|}}} | <tr><th>{{{key20}}}</th><td>{{{value20}}}</td></tr> }}</table></includeonly><noinclude>This is the template for the robot infobox.
Fields are:
* <code>name</code>
* <code>organization</code>
* <code>height</code>
* <code>weight</code>
* <code>single_hand_payload</code>
* <code>two_hand_payload</code>
* <code>cost</code>
* <code>video</code>
* <code>purchase_link</code>
[[Category:Templates]]</noinclude>
deb580d5370d8c6d4555b8f5c3424a4917d34e97
146
145
2024-04-24T07:10:27Z
Mrroboto
5
wikitext
text/x-wiki
<includeonly><table class="infobox"><tr><th colspan="2">{{{name}}}</th></tr>{{#if: {{{key1|}}} | <tr><th>{{{key1}}}</th><td>{{{value1}}}</td></tr> }}{{#if: {{{key2|}}} | <tr><th>{{{key2}}}</th><td>{{{value2}}}</td></tr> }}{{#if: {{{key3|}}} | <tr><th>{{{key3}}}</th><td>{{{value3}}}</td></tr> }}{{#if: {{{key4|}}} | <tr><th>{{{key4}}}</th><td>{{{value4}}}</td></tr> }}{{#if: {{{key5|}}} | <tr><th>{{{key5}}}</th><td>{{{value5}}}</td></tr> }}{{#if: {{{key6|}}} | <tr><th>{{{key6}}}</th><td>{{{value6}}}</td></tr> }}{{#if: {{{key7|}}} | <tr><th>{{{key7}}}</th><td>{{{value7}}}</td></tr> }}{{#if: {{{key8|}}} | <tr><th>{{{key8}}}</th><td>{{{value8}}}</td></tr> }}{{#if: {{{key9|}}} | <tr><th>{{{key9}}}</th><td>{{{value9}}}</td></tr> }}{{#if: {{{key10|}}} | <tr><th>{{{key10}}}</th><td>{{{value10}}}</td></tr> }}{{#if: {{{key11|}}} | <tr><th>{{{key11}}}</th><td>{{{value11}}}</td></tr> }}{{#if: {{{key12|}}} | <tr><th>{{{key12}}}</th><td>{{{value12}}}</td></tr> }}{{#if: {{{key13|}}} | <tr><th>{{{key13}}}</th><td>{{{value13}}}</td></tr> }}{{#if: {{{key14|}}} | <tr><th>{{{key14}}}</th><td>{{{value14}}}</td></tr> }}{{#if: {{{key15|}}} | <tr><th>{{{key15}}}</th><td>{{{value15}}}</td></tr> }}{{#if: {{{key16|}}} | <tr><th>{{{key16}}}</th><td>{{{value16}}}</td></tr> }}{{#if: {{{key17|}}} | <tr><th>{{{key17}}}</th><td>{{{value17}}}</td></tr> }}{{#if: {{{key18|}}} | <tr><th>{{{key18}}}</th><td>{{{value18}}}</td></tr> }}{{#if: {{{key19|}}} | <tr><th>{{{key19}}}</th><td>{{{value19}}}</td></tr> }}{{#if: {{{key20|}}} | <tr><th>{{{key20}}}</th><td>{{{value20}}}</td></tr> }}</table></includeonly><noinclude>This is the template for the robot infobox.
Fields are:
{{#if: {{{key1|}}} | * <code>{{{key1}}}</code> <code>{{{value1}}}</code> }}
{{#if: {{{key2|}}} | * <code>{{{key2}}}</code> <code>{{{value2}}}</code> }}
{{#if: {{{key3|}}} | * <code>{{{key3}}}</code> <code>{{{value3}}}</code> }}
{{#if: {{{key4|}}} | * <code>{{{key4}}}</code> <code>{{{value4}}}</code> }}
{{#if: {{{key5|}}} | * <code>{{{key5}}}</code> <code>{{{value5}}}</code> }}
[[Category:Templates]]</noinclude>
69d696dd41b099fd45ff988b79456a23d88e0c7f
152
146
2024-04-24T07:18:21Z
Mrroboto
5
wikitext
text/x-wiki
<includeonly>
<table class="infobox">
<tr>
<th colspan="2">{{{name}}}</th>
</tr>{{#if: {{{key1|}}} | <tr>
<td class="infobox-left">{{{key1}}}</td>
<td class="infobox-right">{{{value1}}}</td>
</tr> }}{{#if: {{{key2|}}} | <tr>
<td class="infobox-left">{{{key2}}}</td>
<td class="infobox-right">{{{value2}}}</td>
</tr> }}{{#if: {{{key3|}}} | <tr>
<td class="infobox-left">{{{key3}}}</td>
<td class="infobox-right">{{{value3}}}</td>
</tr> }}{{#if: {{{key4|}}} | <tr>
<td class="infobox-left">{{{key4}}}</td>
<td class="infobox-right">{{{value4}}}</td>
</tr> }}{{#if: {{{key5|}}} | <tr>
<td class="infobox-left">{{{key5}}}</td>
<td class="infobox-right">{{{value5}}}</td>
</tr> }}{{#if: {{{key6|}}} | <tr>
<td class="infobox-left">{{{key6}}}</td>
<td class="infobox-right">{{{value6}}}</td>
</tr> }}{{#if: {{{key7|}}} | <tr>
<td class="infobox-left">{{{key7}}}</td>
<td class="infobox-right">{{{value7}}}</td>
</tr> }}{{#if: {{{key8|}}} | <tr>
<td class="infobox-left">{{{key8}}}</td>
<td class="infobox-right">{{{value8}}}</td>
</tr> }}{{#if: {{{key9|}}} | <tr>
<td class="infobox-left">{{{key9}}}</td>
<td class="infobox-right">{{{value9}}}</td>
</tr> }}{{#if: {{{key10|}}} | <tr>
<td class="infobox-left">{{{key10}}}</td>
<td class="infobox-right">{{{value10}}}</td>
</tr> }}{{#if: {{{key11|}}} | <tr>
<td class="infobox-left">{{{key11}}}</td>
<td class="infobox-right">{{{value11}}}</td>
</tr> }}{{#if: {{{key12|}}} | <tr>
<td class="infobox-left">{{{key12}}}</td>
<td class="infobox-right">{{{value12}}}</td>
</tr> }}{{#if: {{{key13|}}} | <tr>
<td class="infobox-left">{{{key13}}}</td>
<td class="infobox-right">{{{value13}}}</td>
</tr> }}{{#if: {{{key14|}}} | <tr>
<td class="infobox-left">{{{key14}}}</td>
<td class="infobox-right">{{{value14}}}</td>
</tr> }}{{#if: {{{key15|}}} | <tr>
<td class="infobox-left">{{{key15}}}</td>
<td class="infobox-right">{{{value15}}}</td>
</tr> }}{{#if: {{{key16|}}} | <tr>
<td class="infobox-left">{{{key16}}}</td>
<td class="infobox-right">{{{value16}}}</td>
</tr> }}{{#if: {{{key17|}}} | <tr>
<td class="infobox-left">{{{key17}}}</td>
<td class="infobox-right">{{{value17}}}</td>
</tr> }}{{#if: {{{key18|}}} | <tr>
<td class="infobox-left">{{{key18}}}</td>
<td class="infobox-right">{{{value18}}}</td>
</tr> }}{{#if: {{{key19|}}} | <tr>
<td class="infobox-left">{{{key19}}}</td>
<td class="infobox-right">{{{value19}}}</td>
</tr> }}{{#if: {{{key20|}}} | <tr>
<td class="infobox-left">{{{key20}}}</td>
<td class="infobox-right">{{{value20}}}</td>
</tr> }}
</table>
</includeonly>
<noinclude>This is the template for the robot infobox.
Fields are:
{{#if: {{{key1|}}} | * <code>{{{key1}}}</code> <code>{{{value1}}}</code> }}
{{#if: {{{key2|}}} | * <code>{{{key2}}}</code> <code>{{{value2}}}</code> }}
{{#if: {{{key3|}}} | * <code>{{{key3}}}</code> <code>{{{value3}}}</code> }}
{{#if: {{{key4|}}} | * <code>{{{key4}}}</code> <code>{{{value4}}}</code> }}
{{#if: {{{key5|}}} | * <code>{{{key5}}}</code> <code>{{{value5}}}</code> }}
[[Category:Templates]]</noinclude>
766cbb3c5719538484dc22623466543525d8dab1
Template:Infobox
10
35
153
152
2024-04-24T07:19:46Z
Mrroboto
5
wikitext
text/x-wiki
<includeonly><table class="infobox"><tr><th colspan="2">{{{name}}}</th></tr>{{#if: {{{key1|}}} | <tr><td class="infobox-left">{{{key1}}}</td><td class="infobox-right">{{{value1}}}</td></tr> }}{{#if: {{{key2|}}} | <tr><td class="infobox-left">{{{key2}}}</td><td class="infobox-right">{{{value2}}}</td></tr> }}{{#if: {{{key3|}}} | <tr><td class="infobox-left">{{{key3}}}</td><td class="infobox-right">{{{value3}}}</td></tr> }}{{#if: {{{key4|}}} | <tr><td class="infobox-left">{{{key4}}}</td><td class="infobox-right">{{{value4}}}</td></tr> }}{{#if: {{{key5|}}} | <tr><td class="infobox-left">{{{key5}}}</td><td class="infobox-right">{{{value5}}}</td></tr> }}{{#if: {{{key6|}}} | <tr><td class="infobox-left">{{{key6}}}</td><td class="infobox-right">{{{value6}}}</td></tr> }}{{#if: {{{key7|}}} | <tr><td class="infobox-left">{{{key7}}}</td><td class="infobox-right">{{{value7}}}</td></tr> }}{{#if: {{{key8|}}} | <tr><td class="infobox-left">{{{key8}}}</td><td class="infobox-right">{{{value8}}}</td></tr> }}{{#if: {{{key9|}}} | <tr><td class="infobox-left">{{{key9}}}</td><td class="infobox-right">{{{value9}}}</td></tr> }}{{#if: {{{key10|}}} | <tr><td class="infobox-left">{{{key10}}}</td><td class="infobox-right">{{{value10}}}</td></tr> }}{{#if: {{{key11|}}} | <tr><td class="infobox-left">{{{key11}}}</td><td class="infobox-right">{{{value11}}}</td></tr> }}{{#if: {{{key12|}}} | <tr><td class="infobox-left">{{{key12}}}</td><td class="infobox-right">{{{value12}}}</td></tr> }}{{#if: {{{key13|}}} | <tr><td class="infobox-left">{{{key13}}}</td><td class="infobox-right">{{{value13}}}</td></tr> }}{{#if: {{{key14|}}} | <tr><td class="infobox-left">{{{key14}}}</td><td class="infobox-right">{{{value14}}}</td></tr> }}{{#if: {{{key15|}}} | <tr><td class="infobox-left">{{{key15}}}</td><td class="infobox-right">{{{value15}}}</td></tr> }}{{#if: {{{key16|}}} | <tr><td class="infobox-left">{{{key16}}}</td><td class="infobox-right">{{{value16}}}</td></tr> }}{{#if: {{{key17|}}} | <tr><td class="infobox-left">{{{key17}}}</td><td class="infobox-right">{{{value17}}}</td></tr> }}{{#if: {{{key18|}}} | <tr><td class="infobox-left">{{{key18}}}</td><td class="infobox-right">{{{value18}}}</td></tr> }}{{#if: {{{key19|}}} | <tr><td class="infobox-left">{{{key19}}}</td><td class="infobox-right">{{{value19}}}</td></tr> }}{{#if: {{{key20|}}} | <tr><td class="infobox-left">{{{key20}}}</td><td class="infobox-right">{{{value20}}}</td></tr> }}</table></includeonly>
<noinclude>This is the template for the robot infobox.
Fields are:
{{#if: {{{key1|}}} | * <code>{{{key1}}}</code> <code>{{{value1}}}</code> }}
{{#if: {{{key2|}}} | * <code>{{{key2}}}</code> <code>{{{value2}}}</code> }}
{{#if: {{{key3|}}} | * <code>{{{key3}}}</code> <code>{{{value3}}}</code> }}
{{#if: {{{key4|}}} | * <code>{{{key4}}}</code> <code>{{{value4}}}</code> }}
{{#if: {{{key5|}}} | * <code>{{{key5}}}</code> <code>{{{value5}}}</code> }}
[[Category:Templates]]</noinclude>
45102a63a1ca8ef42e14b016d0be3b81cf2a39f9
157
153
2024-04-24T07:25:29Z
Mrroboto
5
wikitext
text/x-wiki
<includeonly><table class="infobox"><tr><th colspan="2">{{{name}}}</th></tr>{{#if: {{{value1|}}} | <tr><td class="infobox-left">{{{key1}}}</td><td class="infobox-right">{{{value1}}}</td></tr> }}{{#if: {{{value2|}}} | <tr><td class="infobox-left">{{{key2}}}</td><td class="infobox-right">{{{value2}}}</td></tr> }}{{#if: {{{value3|}}} | <tr><td class="infobox-left">{{{key3}}}</td><td class="infobox-right">{{{value3}}}</td></tr> }}{{#if: {{{value4|}}} | <tr><td class="infobox-left">{{{key4}}}</td><td class="infobox-right">{{{value4}}}</td></tr> }}{{#if: {{{value5|}}} | <tr><td class="infobox-left">{{{key5}}}</td><td class="infobox-right">{{{value5}}}</td></tr> }}{{#if: {{{value6|}}} | <tr><td class="infobox-left">{{{key6}}}</td><td class="infobox-right">{{{value6}}}</td></tr> }}{{#if: {{{value7|}}} | <tr><td class="infobox-left">{{{key7}}}</td><td class="infobox-right">{{{value7}}}</td></tr> }}{{#if: {{{value8|}}} | <tr><td class="infobox-left">{{{key8}}}</td><td class="infobox-right">{{{value8}}}</td></tr> }}{{#if: {{{value9|}}} | <tr><td class="infobox-left">{{{key9}}}</td><td class="infobox-right">{{{value9}}}</td></tr> }}{{#if: {{{value10|}}} | <tr><td class="infobox-left">{{{key10}}}</td><td class="infobox-right">{{{value10}}}</td></tr> }}{{#if: {{{value11|}}} | <tr><td class="infobox-left">{{{key11}}}</td><td class="infobox-right">{{{value11}}}</td></tr> }}{{#if: {{{value12|}}} | <tr><td class="infobox-left">{{{key12}}}</td><td class="infobox-right">{{{value12}}}</td></tr> }}{{#if: {{{value13|}}} | <tr><td class="infobox-left">{{{key13}}}</td><td class="infobox-right">{{{value13}}}</td></tr> }}{{#if: {{{value14|}}} | <tr><td class="infobox-left">{{{key14}}}</td><td class="infobox-right">{{{value14}}}</td></tr> }}{{#if: {{{value15|}}} | <tr><td class="infobox-left">{{{key15}}}</td><td class="infobox-right">{{{value15}}}</td></tr> }}{{#if: {{{value16|}}} | <tr><td class="infobox-left">{{{key16}}}</td><td class="infobox-right">{{{value16}}}</td></tr> }}{{#if: {{{value17|}}} | <tr><td class="infobox-left">{{{key17}}}</td><td class="infobox-right">{{{value17}}}</td></tr> }}{{#if: {{{value18|}}} | <tr><td class="infobox-left">{{{key18}}}</td><td class="infobox-right">{{{value18}}}</td></tr> }}{{#if: {{{value19|}}} | <tr><td class="infobox-left">{{{key19}}}</td><td class="infobox-right">{{{value19}}}</td></tr> }}{{#if: {{{value20|}}} | <tr><td class="infobox-left">{{{key20}}}</td><td class="infobox-right">{{{value20}}}</td></tr> }}</table></includeonly>
<noinclude>This is the template for the robot infobox.
Fields are:
{{#if: {{{value1|}}} | * <code>{{{key1}}}</code> <code>{{{value1}}}</code> }}
{{#if: {{{value2|}}} | * <code>{{{key2}}}</code> <code>{{{value2}}}</code> }}
{{#if: {{{value3|}}} | * <code>{{{key3}}}</code> <code>{{{value3}}}</code> }}
{{#if: {{{value4|}}} | * <code>{{{key4}}}</code> <code>{{{value4}}}</code> }}
{{#if: {{{value5|}}} | * <code>{{{key5}}}</code> <code>{{{value5}}}</code> }}
[[Category:Templates]]</noinclude>
aa6245087d4a18d29673c0b6fdaa5e450e0094fb
161
157
2024-04-24T07:27:23Z
Mrroboto
5
wikitext
text/x-wiki
<includeonly><table class="infobox"><tr><th colspan="2">{{{name}}}</th></tr>{{#if: {{{value1|}}} | <tr><td class="infobox-left">{{{key1}}}</td><td class="infobox-right">{{{value1}}}</td></tr> }}{{#if: {{{value2|}}} | <tr><td class="infobox-left">{{{key2}}}</td><td class="infobox-right">{{{value2}}}</td></tr> }}{{#if: {{{value3|}}} | <tr><td class="infobox-left">{{{key3}}}</td><td class="infobox-right">{{{value3}}}</td></tr> }}{{#if: {{{value4|}}} | <tr><td class="infobox-left">{{{key4}}}</td><td class="infobox-right">{{{value4}}}</td></tr> }}{{#if: {{{value5|}}} | <tr><td class="infobox-left">{{{key5}}}</td><td class="infobox-right">{{{value5}}}</td></tr> }}{{#if: {{{value6|}}} | <tr><td class="infobox-left">{{{key6}}}</td><td class="infobox-right">{{{value6}}}</td></tr> }}{{#if: {{{value7|}}} | <tr><td class="infobox-left">{{{key7}}}</td><td class="infobox-right">{{{value7}}}</td></tr> }}{{#if: {{{value8|}}} | <tr><td class="infobox-left">{{{key8}}}</td><td class="infobox-right">{{{value8}}}</td></tr> }}{{#if: {{{value9|}}} | <tr><td class="infobox-left">{{{key9}}}</td><td class="infobox-right">{{{value9}}}</td></tr> }}{{#if: {{{value10|}}} | <tr><td class="infobox-left">{{{key10}}}</td><td class="infobox-right">{{{value10}}}</td></tr> }}{{#if: {{{value11|}}} | <tr><td class="infobox-left">{{{key11}}}</td><td class="infobox-right">{{{value11}}}</td></tr> }}{{#if: {{{value12|}}} | <tr><td class="infobox-left">{{{key12}}}</td><td class="infobox-right">{{{value12}}}</td></tr> }}{{#if: {{{value13|}}} | <tr><td class="infobox-left">{{{key13}}}</td><td class="infobox-right">{{{value13}}}</td></tr> }}{{#if: {{{value14|}}} | <tr><td class="infobox-left">{{{key14}}}</td><td class="infobox-right">{{{value14}}}</td></tr> }}{{#if: {{{value15|}}} | <tr><td class="infobox-left">{{{key15}}}</td><td class="infobox-right">{{{value15}}}</td></tr> }}{{#if: {{{value16|}}} | <tr><td class="infobox-left">{{{key16}}}</td><td class="infobox-right">{{{value16}}}</td></tr> }}{{#if: {{{value17|}}} | <tr><td class="infobox-left">{{{key17}}}</td><td class="infobox-right">{{{value17}}}</td></tr> }}{{#if: {{{value18|}}} | <tr><td class="infobox-left">{{{key18}}}</td><td class="infobox-right">{{{value18}}}</td></tr> }}{{#if: {{{value19|}}} | <tr><td class="infobox-left">{{{key19}}}</td><td class="infobox-right">{{{value19}}}</td></tr> }}{{#if: {{{value20|}}} | <tr><td class="infobox-left">{{{key20}}}</td><td class="infobox-right">{{{value20}}}</td></tr> }}</table></includeonly>
<noinclude>This is the template for the robot infobox.
Fields are:
{{#if: {{{key1|}}} | * <code>{{{key1}}}</code> <code>{{{value1}}}</code> }}
{{#if: {{{key2|}}} | * <code>{{{key2}}}</code> <code>{{{value2}}}</code> }}
{{#if: {{{key3|}}} | * <code>{{{key3}}}</code> <code>{{{value3}}}</code> }}
{{#if: {{{key4|}}} | * <code>{{{key4}}}</code> <code>{{{value4}}}</code> }}
{{#if: {{{key5|}}} | * <code>{{{key5}}}</code> <code>{{{value5}}}</code> }}
[[Category:Templates]]</noinclude>
f49f16982aa7cb7e0e2f9da1acc2ff7fd7798b50
184
161
2024-04-24T07:48:01Z
Mrroboto
5
wikitext
text/x-wiki
<includeonly><table class="infobox"><tr><th colspan="2">{{{name}}}</th></tr>{{#if: {{{value1|}}} | <tr><td class="infobox-left">{{{key1}}}</td><td class="infobox-right">{{{value1}}}</td></tr> }}{{#if: {{{value2|}}} | <tr><td class="infobox-left">{{{key2}}}</td><td class="infobox-right">{{{value2}}}</td></tr> }}{{#if: {{{value3|}}} | <tr><td class="infobox-left">{{{key3}}}</td><td class="infobox-right">{{{value3}}}</td></tr> }}{{#if: {{{value4|}}} | <tr><td class="infobox-left">{{{key4}}}</td><td class="infobox-right">{{{value4}}}</td></tr> }}{{#if: {{{value5|}}} | <tr><td class="infobox-left">{{{key5}}}</td><td class="infobox-right">{{{value5}}}</td></tr> }}{{#if: {{{value6|}}} | <tr><td class="infobox-left">{{{key6}}}</td><td class="infobox-right">{{{value6}}}</td></tr> }}{{#if: {{{value7|}}} | <tr><td class="infobox-left">{{{key7}}}</td><td class="infobox-right">{{{value7}}}</td></tr> }}{{#if: {{{value8|}}} | <tr><td class="infobox-left">{{{key8}}}</td><td class="infobox-right">{{{value8}}}</td></tr> }}{{#if: {{{value9|}}} | <tr><td class="infobox-left">{{{key9}}}</td><td class="infobox-right">{{{value9}}}</td></tr> }}{{#if: {{{value10|}}} | <tr><td class="infobox-left">{{{key10}}}</td><td class="infobox-right">{{{value10}}}</td></tr> }}{{#if: {{{value11|}}} | <tr><td class="infobox-left">{{{key11}}}</td><td class="infobox-right">{{{value11}}}</td></tr> }}{{#if: {{{value12|}}} | <tr><td class="infobox-left">{{{key12}}}</td><td class="infobox-right">{{{value12}}}</td></tr> }}{{#if: {{{value13|}}} | <tr><td class="infobox-left">{{{key13}}}</td><td class="infobox-right">{{{value13}}}</td></tr> }}{{#if: {{{value14|}}} | <tr><td class="infobox-left">{{{key14}}}</td><td class="infobox-right">{{{value14}}}</td></tr> }}{{#if: {{{value15|}}} | <tr><td class="infobox-left">{{{key15}}}</td><td class="infobox-right">{{{value15}}}</td></tr> }}{{#if: {{{value16|}}} | <tr><td class="infobox-left">{{{key16}}}</td><td class="infobox-right">{{{value16}}}</td></tr> }}{{#if: {{{value17|}}} | <tr><td class="infobox-left">{{{key17}}}</td><td class="infobox-right">{{{value17}}}</td></tr> }}{{#if: {{{value18|}}} | <tr><td class="infobox-left">{{{key18}}}</td><td class="infobox-right">{{{value18}}}</td></tr> }}{{#if: {{{value19|}}} | <tr><td class="infobox-left">{{{key19}}}</td><td class="infobox-right">{{{value19}}}</td></tr> }}{{#if: {{{value20|}}} | <tr><td class="infobox-left">{{{key20}}}</td><td class="infobox-right">{{{value20}}}</td></tr> }}</table></includeonly><noinclude>This is the template for the robot infobox.
Fields are:
{{#if: {{{key1|}}} | * <code>{{{key1}}}</code> <code>{{{value1}}}</code> }}
{{#if: {{{key2|}}} | * <code>{{{key2}}}</code> <code>{{{value2}}}</code> }}
{{#if: {{{key3|}}} | * <code>{{{key3}}}</code> <code>{{{value3}}}</code> }}
{{#if: {{{key4|}}} | * <code>{{{key4}}}</code> <code>{{{value4}}}</code> }}
{{#if: {{{key5|}}} | * <code>{{{key5}}}</code> <code>{{{value5}}}</code> }}
[[Category:Templates]]</noinclude>
7263756283434f53955e5728415cc65c7bbd6e6e
Template:Infobox company
10
25
154
115
2024-04-24T07:21:38Z
Mrroboto
5
wikitext
text/x-wiki
{{infobox
| name = {{{name}}}
| key1 = Name
| value1 = {{{name}}}
| key2 = Country
| value2 = {{{country}}}
| key3 = {{#if: {{{website|}}} | Website }}
| value3 = [{{{website}}} Website]
| key4 = Robots
| value4 = {{{robots}}}
}}
f2abf70039bcc3854452a5cdebaabcb05cf47ce5
159
154
2024-04-24T07:26:15Z
Mrroboto
5
wikitext
text/x-wiki
{{infobox
| name = {{{name}}}
| key1 = Name
| value1 = {{{name}}}
| key2 = Country
| value2 = {{{country}}}
| key3 = Website
| value3 = {{#if: {{{website|}}} | [{{{website}}} Website] }}
| key4 = Robots
| value4 = {{{robots}}}
}}
94f3e8738aa4a055425b603633a7e3bb631a08a5
160
159
2024-04-24T07:26:45Z
Mrroboto
5
wikitext
text/x-wiki
{{infobox
| name = {{{name|}}}
| key1 = Name
| value1 = {{{name|}}}
| key2 = Country
| value2 = {{{country|}}}
| key3 = Website
| value3 = {{#if: {{{website|}}} | [{{{website}}} Website] }}
| key4 = Robots
| value4 ={{{robots|}}}
}}
518ff8998c0efcf9fa4beb15cfc16f6b98e925db
162
160
2024-04-24T07:27:34Z
Mrroboto
5
wikitext
text/x-wiki
{{infobox
| name = {{{name|}}}
| key1 = Name
| value1 = {{{name|}}}
| key2 = Country
| value2 = {{{country|}}}
| key3 = Website
| value3 = {{#if: {{{website|}}} | [{{{website}}} Website] }}
| key4 = Robots
| value4 = {{{robots|}}}
}}
1a87279bf81ff1109842e4a9f846847a10d482ef
164
162
2024-04-24T07:28:27Z
Mrroboto
5
wikitext
text/x-wiki
{{infobox
| name = {{{name}}}
| key1 = Name
| value1 = {{{name}}}
| key2 = Country
| value2 = {{{country|}}}
| key3 = Website
| value3 = {{#if: {{{website|}}} | [{{{website}}} Website] }}
| key4 = Robots
| value4 = {{{robots|}}}
}}
b795ceaeeb24837ae6e1d7c67313264ddde0fa74
170
164
2024-04-24T07:32:04Z
Mrroboto
5
wikitext
text/x-wiki
{{infobox
| name = {{{name}}}
| key1 = Name
| value1 = {{{name}}}
| key2 = Country
| value2 = {{{country|}}}
| key3 = Website
| value3 = {{#if: {{{website_link|}}} | [{{{website_link}}} Website] }}
| key4 = Robots
| value4 = {{{robots|}}}
}}
f7f6f422d2fbac3594c45877b8a00efdf7533f17
Template:Infobox robot
10
28
155
151
2024-04-24T07:23:19Z
Mrroboto
5
wikitext
text/x-wiki
{{infobox
| name = {{{name}}}
| key1 = Name
| value1 = {{{name}}}
| key2 = Organization
| value2 = {{{organization}}}
| key3 = {{#if: {{{video|}}} | Video }}
| value3 = [{{{video}}} Video]
| key4 = Cost
| value4 = {{{cost}}}
| key5 = Height
| value5 = {{{height}}}
| key6 = Weight
| value6 = {{{weight}}}
| key7 = Lift Force
| value7 = {{{lift_force}}}
| key8 = {{#if: {{{purchase_link|}}} | Purchase }}
| value8 = [{{{purchase_link}}} Link]
}}
2297cc7b73ba1223b8d5ad7b78da82cc6d0878d6
158
155
2024-04-24T07:25:33Z
Mrroboto
5
wikitext
text/x-wiki
{{infobox
| name = {{{name}}}
| key1 = Name
| value1 = {{{name}}}
| key2 = Organization
| value2 = {{{organization}}}
| key3 = Video
| value3 = {{#if: {{{video|}}} | [{{{video}}} Video] }}
| key4 = Cost
| value4 = {{{cost}}}
| key5 = Height
| value5 = {{{height}}}
| key6 = Weight
| value6 = {{{weight}}}
| key7 = Lift Force
| value7 = {{{lift_force}}}
| key8 = Purchase
| value8 = {{#if: {{{purchase_link|}}} | [{{{purchase_link}}} Link] }}
}}
f6c30963b2982e8060144b66ec27a71ae011798f
163
158
2024-04-24T07:28:20Z
Mrroboto
5
wikitext
text/x-wiki
{{infobox
| name = {{{name}}}
| key1 = Name
| value1 = {{{name}}}
| key2 = Organization
| value2 = {{{organization|}}}
| key3 = Video
| value3 = {{#if: {{{video|}}} | [{{{video}}} Video] }}
| key4 = Cost
| value4 = {{{cost|}}}
| key5 = Height
| value5 = {{{height|}}}
| key6 = Weight
| value6 = {{{weight|}}}
| key7 = Lift Force
| value7 = {{{lift_force|}}}
| key8 = Purchase
| value8 = {{#if: {{{purchase_link|}}} | [{{{purchase_link}}} Link] }}
}}
17e56ad5d8a1656f1bbe9e70c495df453f7b3b0b
165
163
2024-04-24T07:29:36Z
Mrroboto
5
wikitext
text/x-wiki
{{infobox
| name = {{{name}}}
| key1 = Name
| value1 = {{{name}}}
| key2 = Organization
| value2 = {{{organization|}}}
| key3 = Video
| value3 = {{#if: {{{video|}}} | [{{{video}}} Video] }}
| key4 = Cost
| value4 = {{{cost|}}}
| key5 = Height
| value5 = {{{height|}}}
| key6 = Weight
| value6 = {{{weight|}}}
| key7 = Lift Force
| value7 = {{{lift_force|}}}
| key8 = Battery Life
| value8 = {{{battery_life|}}}
| key8 = Purchase
| value8 = {{#if: {{{purchase_link|}}} | [{{{purchase_link}}} Link] }}
}}
034b28a570c64835ccc35995caefa2cf31731606
180
165
2024-04-24T07:42:41Z
Mrroboto
5
wikitext
text/x-wiki
{{infobox
| name = {{{name}}}
| key1 = Name
| value1 = {{{name}}}
| key2 = Organization
| value2 = {{{organization|}}}
| key3 = Video
| value3 = {{#if: {{{video_link|}}} | [{{{video_link}}} Video] }}
| key4 = Cost
| value4 = {{{cost|}}}
| key5 = Height
| value5 = {{{height|}}}
| key6 = Weight
| value6 = {{{weight|}}}
| key7 = Lift Force
| value7 = {{{lift_force|}}}
| key8 = Battery Life
| value8 = {{{battery_life|}}}
| key8 = Purchase
| value8 = {{#if: {{{purchase_link|}}} | [{{{purchase_link}}} Link] }}
}}
228b40e88657101cad42969794b866563320727f
181
180
2024-04-24T07:42:57Z
Mrroboto
5
wikitext
text/x-wiki
{{infobox
| name = {{{name}}}
| key1 = Name
| value1 = {{{name}}}
| key2 = Organization
| value2 = {{{organization|}}}
| key3 = Video
| value3 = {{#if: {{{video_link|}}} | [{{{video_link}}} Video] }}
| key4 = Cost
| value4 = {{{cost|}}}
| key5 = Height
| value5 = {{{height|}}}
| key6 = Weight
| value6 = {{{weight|}}}
| key7 = Lift Force
| value7 = {{{lift_force|}}}
| key8 = Battery Life
| value8 = {{{battery_life|}}}
| key9 = Purchase
| value9 = {{#if: {{{purchase_link|}}} | [{{{purchase_link}}} Link] }}
}}
8d7ee5633b31454ecde31f8058d00a371cbbf9fa
182
181
2024-04-24T07:45:38Z
Mrroboto
5
wikitext
text/x-wiki
{{infobox
| name = {{{name}}}
| key1 = Name
| value1 = {{{name}}}
| key2 = Organization
| value2 = {{{organization|}}}
| key3 = Video
| value3 = {{#if: {{{video_link|}}} | [{{{video_link}}} Video] }}
| key4 = Cost
| value4 = {{{cost|}}}
| key5 = Height
| value5 = {{{height|}}}
| key6 = Weight
| value6 = {{{weight|}}}
| key7 = Speed
| value7 = {{{speed|}}}
| key8 = Lift Force
| value8 = {{{lift_force|}}}
| key9 = Battery Life
| value9 = {{{battery_life|}}}
| key10 = Battery Capacity
| value10 = {{{battery_capacity|}}}
| key11 = Purchase
| value11 = {{#if: {{{purchase_link|}}} | [{{{purchase_link}}} Link] }}
| key12 = Number Made
| value12 = {{{number_made|}}}
| key13 = DoF
| value13 = {{{dof|}}}
| key14 = Status
| value14 = {{{status|}}}
}}
fed766ff398d58d0b86ec28c3ae5bca9a62c414a
Optimus
0
22
156
150
2024-04-24T07:23:42Z
Mrroboto
5
wikitext
text/x-wiki
Optimus is a humanoid robot from [[Tesla]].
{{infobox robot
| name = Optimus
| organization = [[Tesla]]
| video = https://www.youtube.com/watch?v=cpraXaw7dyc
| cost = Unknown, rumored $20k
}}
[[File:Optimus Tesla (1).jpg|none|300px|The Tesla Optimus on display|thumb]]
[[Category:Robots]]
aec3b64acc1be73c94c3e1cc4527fa30c10f908c
167
156
2024-04-24T07:30:58Z
Mrroboto
5
wikitext
text/x-wiki
Optimus is a humanoid robot from [[Tesla]].
{{infobox robot
| name = Optimus
| organization = [[Tesla]]
| video_link = https://www.youtube.com/watch?v=cpraXaw7dyc
| cost = Unknown, rumored $20k
}}
[[File:Optimus Tesla (1).jpg|none|300px|The Tesla Optimus on display|thumb]]
[[Category:Robots]]
931a6a4042eff44ffac49b42e7a93565d13b34ea
178
167
2024-04-24T07:40:04Z
Mrroboto
5
wikitext
text/x-wiki
Optimus is a humanoid robot from [[Tesla]].
{{infobox robot
| name = Optimus
| organization = [[Tesla]]
| video_link = https://www.youtube.com/watch?v=cpraXaw7dyc
| cost = Unknown, rumored $20k
}}
Tesla began work on the Optimus robot in 2021.
[[File:Optimus Tesla (1).jpg|none|300px|The Tesla Optimus on display|thumb]]
[[Category:Robots]]
7d6acc73a5cc062cf4c5b7ac89f85af8aafdbe60
179
178
2024-04-24T07:40:11Z
Mrroboto
5
wikitext
text/x-wiki
Optimus is a humanoid robot from [[Tesla]].
{{infobox robot
| name = Optimus
| organization = [[Tesla]]
| video_link = https://www.youtube.com/watch?v=cpraXaw7dyc
| cost = Unknown, rumored $20k
}}
Tesla began work on the Optimus robot in 2021.
[[File:Optimus Tesla (1).jpg|none|300px|The Tesla Optimus on display|thumb]]
[[Category:Robots]]
0870360f622a742982fc117229041450cba128ee
194
179
2024-04-24T07:54:34Z
Mrroboto
5
wikitext
text/x-wiki
Optimus is a humanoid robot from [[Tesla]].
{{infobox robot
| name = Optimus
| organization = [[Tesla]]
| video_link = https://www.youtube.com/watch?v=cpraXaw7dyc
| cost = Unknown, rumored $20k
}}
Tesla began work on the Optimus robot in 2021.
[[File:Optimus Tesla (1).jpg|none|300px|The Tesla Optimus on display|thumb]]
[[Category:Robots]]
7d6acc73a5cc062cf4c5b7ac89f85af8aafdbe60
Template talk:Infobox robot
11
36
166
2024-04-24T07:30:44Z
Mrroboto
5
Created page with "{{infobox | name = {{{name}}} | key1 = Name | value1 = {{{name}}} | key2 = Organization | value2 = {{{organization|}}} | key3 = Video | value3 = {{#if: {{{video_link|}}} | [{{..."
wikitext
text/x-wiki
{{infobox
| name = {{{name}}}
| key1 = Name
| value1 = {{{name}}}
| key2 = Organization
| value2 = {{{organization|}}}
| key3 = Video
| value3 = {{#if: {{{video_link|}}} | [{{{video_link}}} Video] }}
| key4 = Cost
| value4 = {{{cost|}}}
| key5 = Height
| value5 = {{{height|}}}
| key6 = Weight
| value6 = {{{weight|}}}
| key7 = Lift Force
| value7 = {{{lift_force|}}}
| key8 = Battery Life
| value8 = {{{battery_life|}}}
| key8 = Purchase
| value8 = {{#if: {{{purchase_link|}}} | [{{{purchase_link}}} Link] }}
}}
Fields:
* <code>organization</code>
* <code>video_link</code>
* <code>cost</code>
* <code>height</code>
* <code>weight</code>
* <code>lift_force</code>
* <code>battery_life</code>
* <code>purchase_link</code>
b58f783908efc95ee8d904c149fc94db5a4cb350
169
166
2024-04-24T07:31:55Z
Mrroboto
5
wikitext
text/x-wiki
Fields:
* <code>organization</code>
* <code>video_link</code>
* <code>cost</code>
* <code>height</code>
* <code>weight</code>
* <code>lift_force</code>
* <code>battery_life</code>
* <code>purchase_link</code>
15786e5eda391575690876877958896df4bc8b29
Template talk:Infobox company
11
37
168
2024-04-24T07:31:42Z
Mrroboto
5
Created page with "Fields: * <code>name</code> * <code>country</code> * <code>website_link</code> * <code>robots</code>"
wikitext
text/x-wiki
Fields:
* <code>name</code>
* <code>country</code>
* <code>website_link</code>
* <code>robots</code>
52f848b4c5c83e9a0cef937520d89ac8700b0216
Tesla
0
7
171
122
2024-04-24T07:32:12Z
Mrroboto
5
wikitext
text/x-wiki
Tesla is building a humanoid robot called [[Optimus]].
{{infobox company
| name = Tesla
| country = United States
| website_link = https://www.tesla.com/
| robots = [[Optimus]]
}}
[[Category:Companies]]
f7819b06a63448554158caeb318c60621ac3e1c8
MediaWiki:Common.css
8
27
172
142
2024-04-24T07:34:24Z
Admin
1
css
text/css
/* CSS placed here will be applied to all skins */
.infobox {
border: 1px solid #aaaaaa;
background-color: #f9f9f9;
padding: 5px;
width: 300px;
font-size: 90%;
margin-top: 20px;
margin-bottom: 20px;
}
.infobox td .infobox-left {
background-color: #e0e0e0;
text-align: center;
padding: 2px 5px;
text-align: center;
background-color: #cccccc;
}
.infobox td {
padding: 2px 5px;
}
7226ebadb8dc8ec772d1c99745ed12a54d85e8c3
173
172
2024-04-24T07:34:31Z
Admin
1
css
text/css
/* CSS placed here will be applied to all skins */
.infobox {
border: 1px solid #aaaaaa;
background-color: #f9f9f9;
padding: 5px;
width: 300px;
font-size: 90%;
margin-top: 20px;
margin-bottom: 20px;
}
.infobox td .infobox-left {
background-color: #e0e0e0;
text-align: center;
padding: 2px 5px;
}
.infobox td {
padding: 2px 5px;
}
d94bac92d8b6151913f78c145149d50d400beb0c
174
173
2024-04-24T07:34:45Z
Admin
1
Undo revision 173 by [[Special:Contributions/Admin|Admin]] ([[User talk:Admin|talk]])
css
text/css
/* CSS placed here will be applied to all skins */
.infobox {
border: 1px solid #aaaaaa;
background-color: #f9f9f9;
padding: 5px;
width: 300px;
font-size: 90%;
margin-top: 20px;
margin-bottom: 20px;
}
.infobox td .infobox-left {
background-color: #e0e0e0;
text-align: center;
padding: 2px 5px;
text-align: center;
background-color: #cccccc;
}
.infobox td {
padding: 2px 5px;
}
7226ebadb8dc8ec772d1c99745ed12a54d85e8c3
175
174
2024-04-24T07:35:07Z
Admin
1
css
text/css
/* CSS placed here will be applied to all skins */
.infobox {
border: 1px solid #aaaaaa;
background-color: #f9f9f9;
padding: 5px;
width: 300px;
font-size: 90%;
margin-top: 20px;
margin-bottom: 20px;
}
.infobox th {
background-color: #e0e0e0;
padding: 2px 5px;
text-align: center;
}
.infobox td {
padding: 2px 5px;
}
ab5bbf2daca6bff8d8e124f4aea5f663ab05a507
176
175
2024-04-24T07:35:41Z
Admin
1
css
text/css
/* CSS placed here will be applied to all skins */
.infobox {
border: 1px solid #aaaaaa;
background-color: #f9f9f9;
padding: 5px;
width: 300px;
font-size: 90%;
margin-top: 20px;
margin-bottom: 20px;
}
.infobox th {
background-color: #e0e0e0;
padding: 2px 5px;
text-align: center;
}
.infobox td {
padding: 2px 5px;
}
.infobox td .infobox-left {
font-style: italic;
}
f1df76d6752a57a3c0a9bbd500fc9f237cf1687b
177
176
2024-04-24T07:36:42Z
Admin
1
css
text/css
/* CSS placed here will be applied to all skins */
.infobox {
border: 1px solid #aaaaaa;
background-color: #f9f9f9;
padding: 5px;
width: 300px;
font-size: 90%;
margin-top: 20px;
margin-bottom: 20px;
}
.infobox th {
background-color: #e0e0e0;
padding: 2px 5px;
text-align: center;
}
.infobox td {
padding: 2px 5px;
}
.infobox-left {
font-style: italic;
}
36098d154d5c7b8163de04b23dd304ea0d17fc59
Cassie
0
24
183
89
2024-04-24T07:47:41Z
Mrroboto
5
wikitext
text/x-wiki
Cassie is a bipedal robot designed by Oregon State University and licensed and built by Agility Robotics.
{{infobox robot
| name = Cassie
| organization = [[Agility]]
| video_link = https://www.youtube.com/watch?v=64hKiuJ31a4
| cost = $250,000
| height = 115 cm
| weight = 31 kg
| speed = >4 m/s (8.95 mph)
| battery_life = 5 hours
| battery_capacity = 1 kWh
| dof = 10 (5 per leg)
| number_made = ~12
| status = Retired
}}
==Development==
On February 9, 2017, Cassie was unveiled at an event at Oregon State University featuring a live demo. A video was also posted to both the OSU and Agility Robotics YouTube channels.
On September 5, 2017, the University of Michigan received the first Cassie, which it named "Cassie Blue." It would later receive "Cassie Yellow."
==Firsts and World Record==
Oregon State University's DRAIL lab used a model trained with reinforcement learning to have Cassie run a 100 m dash in 24.73 seconds. The Guinness World Record required a standing start and finish, so Cassie averaged a speed of over 4 m/s (100 m / 24.73 s ≈ 4.04 m/s). For comparison, at the time the Guinness World Record for the fastest running humanoid was held by ASIMO at 2.5 m/s, a record set indoors on a specially leveled floor. The run was done outside at the Whyte Track and Field Center on Oregon State's campus. The record was announced in September 2022, but the actual run took place six months earlier. Cassie and the OSU DRAIL lab have an entire page dedicated to them in the 2024 Guinness World Records book. Footage of the run can be seen [https://www.youtube.com/watch?v=DdojWYOK0Nc here].
eced16f955f7717f14459f0883b981c6e3a40efb
186
183
2024-04-24T07:50:01Z
Mrroboto
5
wikitext
text/x-wiki
Cassie is a bipedal robot designed by Oregon State University and licensed and built by Agility Robotics.
{{infobox robot
| name = Cassie
| organization = [[Agility]]
| video_link = https://www.youtube.com/watch?v=64hKiuJ31a4
| cost = $250,000
| height = 115 cm
| weight = 31 kg
| speed = >4 m/s (8.95 mph)
| battery_life = 5 hours
| battery_capacity = 1 kWh
| dof = 10 (5 per leg)
| number_made = ~12
| status = Retired
}}
==Development==
On February 9, 2017, Cassie was unveiled at an event at Oregon State University featuring a live demo. A video was also posted to both the OSU and Agility Robotics YouTube channels.
On September 5, 2017, the University of Michigan received the first Cassie, which it named "Cassie Blue." It would later receive "Cassie Yellow."
[[File:Cassie.jpg|thumb]]
==Firsts and World Record==
Oregon State University's DRAIL lab used a model trained with reinforcement learning to have Cassie run a 100 m dash in 24.73 seconds. The Guinness World Record required a standing start and finish, so Cassie averaged a speed of over 4 m/s (100 m / 24.73 s ≈ 4.04 m/s). For comparison, at the time the Guinness World Record for the fastest running humanoid was held by ASIMO at 2.5 m/s, a record set indoors on a specially leveled floor. The run was done outside at the Whyte Track and Field Center on Oregon State's campus. The record was announced in September 2022, but the actual run took place six months earlier. Cassie and the OSU DRAIL lab have an entire page dedicated to them in the 2024 Guinness World Records book. Footage of the run can be seen [https://www.youtube.com/watch?v=DdojWYOK0Nc here].
8ac154ab6974f0be85312efe9de03c5089394414
187
186
2024-04-24T07:50:13Z
Mrroboto
5
wikitext
text/x-wiki
Cassie is a bipedal robot designed by Oregon State University and licensed and built by Agility Robotics.
{{infobox robot
| name = Cassie
| organization = [[Agility]]
| video_link = https://www.youtube.com/watch?v=64hKiuJ31a4
| cost = $250,000
| height = 115 cm
| weight = 31 kg
| speed = >4 m/s (8.95 mph)
| battery_life = 5 hours
| battery_capacity = 1 kWh
| dof = 10 (5 per leg)
| number_made = ~12
| status = Retired
}}
[[File:Cassie.jpg|thumb]]
==Development==
On February 9, 2017, Cassie was unveiled at an event at Oregon State University featuring a live demo. A video was also posted to both the OSU and Agility Robotics YouTube channels.
On September 5, 2017, the University of Michigan received the first Cassie, which it named "Cassie Blue." It would later receive "Cassie Yellow."
==Firsts and World Record==
Oregon State University's DRAIL lab used a model trained with reinforcement learning to have Cassie run a 100 m dash in 24.73 seconds. The Guinness World Record required a standing start and finish, so Cassie averaged a speed of over 4 m/s (100 m / 24.73 s ≈ 4.04 m/s). For comparison, at the time the Guinness World Record for the fastest running humanoid was held by ASIMO at 2.5 m/s, a record set indoors on a specially leveled floor. The run was done outside at the Whyte Track and Field Center on Oregon State's campus. The record was announced in September 2022, but the actual run took place six months earlier. Cassie and the OSU DRAIL lab have an entire page dedicated to them in the 2024 Guinness World Records book. Footage of the run can be seen [https://www.youtube.com/watch?v=DdojWYOK0Nc here].
cc014401b4c3e47a5890517e194d56b4dec924e5
188
187
2024-04-24T07:50:26Z
Mrroboto
5
wikitext
text/x-wiki
Cassie is a bipedal robot designed by Oregon State University and licensed and built by [[Agility Robotics]].
[[File:Cassie.jpg|thumb]]
{{infobox robot
| name = Cassie
| organization = [[Agility]]
| video_link = https://www.youtube.com/watch?v=64hKiuJ31a4
| cost = $250,000
| height = 115 cm
| weight = 31 kg
| speed = >4 m/s (8.95 mph)
| battery_life = 5 hours
| battery_capacity = 1 kWh
| dof = 10 (5 per leg)
| number_made = ~12
| status = Retired
}}
==Development==
On February 9, 2017, Cassie was unveiled at an event at Oregon State University featuring a live demo. A video was also posted to both the OSU and Agility Robotics YouTube channels.
On September 5, 2017, the University of Michigan received the first Cassie, which it named "Cassie Blue." It would later receive "Cassie Yellow."
==Firsts and World Record==
Oregon State University's DRAIL lab used a model trained with reinforcement learning to have Cassie run a 100 m dash in 24.73 seconds. The Guinness World Record required a standing start and finish, so Cassie averaged a speed of over 4 m/s (100 m / 24.73 s ≈ 4.04 m/s). For comparison, at the time the Guinness World Record for the fastest running humanoid was held by ASIMO at 2.5 m/s, a record set indoors on a specially leveled floor. The run was done outside at the Whyte Track and Field Center on Oregon State's campus. The record was announced in September 2022, but the actual run took place six months earlier. Cassie and the OSU DRAIL lab have an entire page dedicated to them in the 2024 Guinness World Records book. Footage of the run can be seen [https://www.youtube.com/watch?v=DdojWYOK0Nc here].
3d070e6b71ed9b1faf11a4c08d11efe452d63228
190
188
2024-04-24T07:50:47Z
Mrroboto
5
wikitext
text/x-wiki
Cassie is a bipedal robot designed by Oregon State University and licensed and built by [[Agility]].
[[File:Cassie.jpg|thumb]]
{{infobox robot
| name = Cassie
| organization = [[Agility]]
| video_link = https://www.youtube.com/watch?v=64hKiuJ31a4
| cost = $250,000
| height = 115 cm
| weight = 31 kg
| speed = >4 m/s (8.95 mph)
| battery_life = 5 hours
| battery_capacity = 1 kWh
| dof = 10 (5 per leg)
| number_made = ~12
| status = Retired
}}
==Development==
On February 9, 2017, Cassie was unveiled at an event at Oregon State University featuring a live demo. A video was also posted to both the OSU and Agility Robotics YouTube channels.
On September 5, 2017, the University of Michigan received the first Cassie, which it named "Cassie Blue." It would later receive "Cassie Yellow."
==Firsts and World Record==
Oregon State University's DRAIL lab used a model trained with reinforcement learning to have Cassie run a 100 m dash in 24.73 seconds. The Guinness World Record required a standing start and finish, so Cassie averaged a speed of over 4 m/s (100 m / 24.73 s ≈ 4.04 m/s). For comparison, at the time the Guinness World Record for the fastest running humanoid was held by ASIMO at 2.5 m/s, a record set indoors on a specially leveled floor. The run was done outside at the Whyte Track and Field Center on Oregon State's campus. The record was announced in September 2022, but the actual run took place six months earlier. Cassie and the OSU DRAIL lab have an entire page dedicated to them in the 2024 Guinness World Records book. Footage of the run can be seen [https://www.youtube.com/watch?v=DdojWYOK0Nc here].
6a238ec32dfd5172595f23d0ea9615af3bc37401
191
190
2024-04-24T07:51:27Z
Mrroboto
5
wikitext
text/x-wiki
Cassie is a bipedal robot designed by Oregon State University and licensed and built by [[Agility]].
[[File:Cassie.jpg|right|200px|thumb]]
{{infobox robot
| name = Cassie
| organization = [[Agility]]
| video_link = https://www.youtube.com/watch?v=64hKiuJ31a4
| cost = $250,000
| height = 115 cm
| weight = 31 kg
| speed = >4 m/s (8.95 mph)
| battery_life = 5 hours
| battery_capacity = 1 kWh
| dof = 10 (5 per leg)
| number_made = ~12
| status = Retired
}}
==Development==
On February 9, 2017, Cassie was unveiled at an event at Oregon State University featuring a live demo. A video was also posted to both the OSU and Agility Robotics YouTube channels.
On September 5, 2017, the University of Michigan received the first Cassie, which they named "Cassie Blue." They would later receive "Cassie Yellow."
==Firsts and World Record==
Oregon State University's DRAIL lab used a control policy trained with reinforcement learning to have Cassie run a 100 m dash in 24.73 seconds. The Guinness World Record attempt required a standing start and finish, so Cassie averaged a speed of over 4 m/s. For comparison, at the time the Guinness World Record for the fastest running humanoid was held by ASIMO at 2.5 m/s, a run that had to be done indoors on a specially leveled floor. Cassie's run took place outdoors at the Whyte Track and Field Center on Oregon State's campus. The record was announced in September 2022, but the actual run took place about six months earlier. Cassie and the OSU DRAIL lab have an entire page dedicated to them in the 2024 Guinness World Records book. Footage of the run can be seen [https://www.youtube.com/watch?v=DdojWYOK0Nc here].
916110f5eed226049da63b407645cb2008fc2c1e
202
191
2024-04-24T07:59:57Z
Mrroboto
5
wikitext
text/x-wiki
[[File:Cassie.jpg|right|200px|thumb]]
Cassie is a bipedal robot designed by Oregon State University and licensed and built by [[Agility]].
{{infobox robot
| name = Cassie
| organization = [[Agility]]
| video_link = https://www.youtube.com/watch?v=64hKiuJ31a4
| cost = $250,000
| height = 115 cm
| weight = 31 kg
| speed = >4 m/s (8.95 mph)
| battery_life = 5 hours
| battery_capacity = 1 kWh
| dof = 10 (5 per leg)
| number_made = ~12
| status = Retired
}}
==Development==
On February 9, 2017, Cassie was unveiled at an event at Oregon State University featuring a live demo. A video was also posted to both the OSU and Agility Robotics YouTube channels.
On September 5, 2017, the University of Michigan received the first Cassie, which they named "Cassie Blue." They would later receive "Cassie Yellow."
==Firsts and World Record==
Oregon State University's DRAIL lab used a control policy trained with reinforcement learning to have Cassie run a 100 m dash in 24.73 seconds. The Guinness World Record attempt required a standing start and finish, so Cassie averaged a speed of over 4 m/s. For comparison, at the time the Guinness World Record for the fastest running humanoid was held by ASIMO at 2.5 m/s, a run that had to be done indoors on a specially leveled floor. Cassie's run took place outdoors at the Whyte Track and Field Center on Oregon State's campus. The record was announced in September 2022, but the actual run took place about six months earlier. Cassie and the OSU DRAIL lab have an entire page dedicated to them in the 2024 Guinness World Records book. Footage of the run can be seen [https://www.youtube.com/watch?v=DdojWYOK0Nc here].
e15af65042f9102989b6544fbbad7470c9ad177c
File:Cassie.jpg
6
38
185
2024-04-24T07:49:51Z
Mrroboto
5
wikitext
text/x-wiki
A view of the Cassie robot standing
311a30afec019916685992a076131ea91dfd145c
Agility
0
8
189
40
2024-04-24T07:50:39Z
Mrroboto
5
wikitext
text/x-wiki
Agility has built several robots. Their humanoid robot is called [[Digit]].
[[Category:Companies]]
b7c7496933b1b37a0232cd595feac0d64b1c1dd8
192
189
2024-04-24T07:52:22Z
Mrroboto
5
wikitext
text/x-wiki
Agility has built several robots. Their humanoid robot is called [[Digit]].
{{infobox company
| name = Agility
| country = United States
| website_link = https://agilityrobotics.com/
| robots = [[Cassie]], [[Digit]]
}}
[[Category:Companies]]
18dbe0ca7deb2aa4672228d807ad4133dd44b880
Main Page
0
1
193
123
2024-04-24T07:52:44Z
Mrroboto
5
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots. As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
Here are some resources to get started learning about humanoid robots.
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
|
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulated and real environments
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Agility]]
| [[Cassie]], [[DigitV3]], [[Digit]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[EVE]], [[NEO]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Era Robotics]]
| [[XBot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[XPENG]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|}
51c42062c13c144063aab44be8cfe2166803dc5e
File:Stompy.jpg
6
39
195
2024-04-24T07:55:34Z
Mrroboto
5
wikitext
text/x-wiki
Stompy standing up
5472b34d182146bd2bda9716569528d1b3a7939e
Stompy
0
2
196
46
2024-04-24T07:56:36Z
Mrroboto
5
wikitext
text/x-wiki
[[File:Stompy.jpg|right|200px|thumb]]
{{infobox robot
| name = Stompy
| organization = [[K-Scale Labs]]
| cost = $10,000
}}
Stompy is an open-source humanoid robot developed by [[K-Scale Labs]].
= Hardware =
This page is dedicated to detailing the hardware selections for humanoid robots, including various components such as actuators, cameras, compute units, PCBs and modules, batteries, displays, microphones, speakers, as well as wiring and connectors.
== Actuators ==
Actuators are the components that allow the robot to move and interact with its environment. They convert energy into mechanical motion. Common types used in humanoid robots include:
* Servo motors
* Stepper motors
* Linear actuators
== Cameras ==
Cameras are essential for visual processing, allowing the robot to perceive its surroundings. Important considerations include:
* Resolution and frame rate
* Field of view
* Depth sensing capabilities (3D cameras)
== Compute ==
The compute section handles the processing requirements of the robot. This includes:
* Microprocessors and microcontrollers
* Single-board computers like Raspberry Pi or Nvidia Jetson
* Dedicated AI accelerators for machine learning tasks
== PCB and Modules ==
Printed Circuit Boards (PCBs) and the modules on them are the backbone of the robot's electronic system.
* Main control board
* Power management modules
* Sensor interfaces
* Communication modules (Wi-Fi, Bluetooth)
== Batteries ==
Batteries provide the necessary power to all robotic systems and are crucial for mobile autonomy. Selection factors include:
* Battery type (Li-Ion, NiMH, Lead-Acid)
* Capacity (measured in mAh or Ah)
* Voltage and energy density
* Safety features and durability
== Displays ==
Displays are used in robots to present information such as system status, data, and interactive elements. Key features include:
* Size variations ranging from small to large panels
* Touchscreen capabilities
* High resolution displays
== Microphones ==
Microphones enable the robot to receive and process audio inputs, crucial for voice commands and auditory data. Factors to consider are:
* Sensitivity and noise cancellation
* Directionality (omnidirectional vs. unidirectional)
* Integration with voice recognition software
== Speakers ==
Speakers allow the robot to communicate audibly with its environment, essential for interaction and alerts. Considerations include:
* Power output and sound quality
* Size and mounting options
* Compatibility with audio processing hardware
== Wiring and Connectors ==
Proper wiring and connectors ensure reliable communication and power supply throughout the robot's components.
* Types of wires (gauge, shielding)
* Connectors (pin types, waterproofing)
* Cable management solutions
[[Category:Robots]]
23514b97861e631b335e4c3af26ff3bd9bad453b
197
196
2024-04-24T07:56:57Z
Mrroboto
5
wikitext
text/x-wiki
[[File:Stompy.jpg|right|300px|thumb]]
{{infobox robot
| name = Stompy
| organization = [[K-Scale Labs]]
| cost = $10,000
}}
Stompy is an open-source humanoid robot developed by [[K-Scale Labs]].
= Hardware =
This page is dedicated to detailing the hardware selections for humanoid robots, including various components such as actuators, cameras, compute units, PCBs and modules, batteries, displays, microphones, speakers, as well as wiring and connectors.
== Actuators ==
Actuators are the components that allow the robot to move and interact with its environment. They convert energy into mechanical motion. Common types used in humanoid robots include:
* Servo motors
* Stepper motors
* Linear actuators
== Cameras ==
Cameras are essential for visual processing, allowing the robot to perceive its surroundings. Important considerations include:
* Resolution and frame rate
* Field of view
* Depth sensing capabilities (3D cameras)
== Compute ==
The compute section handles the processing requirements of the robot. This includes:
* Microprocessors and microcontrollers
* Single-board computers like Raspberry Pi or Nvidia Jetson
* Dedicated AI accelerators for machine learning tasks
== PCB and Modules ==
Printed Circuit Boards (PCBs) and the modules on them are the backbone of the robot's electronic system.
* Main control board
* Power management modules
* Sensor interfaces
* Communication modules (Wi-Fi, Bluetooth)
== Batteries ==
Batteries provide the necessary power to all robotic systems and are crucial for mobile autonomy. Selection factors include:
* Battery type (Li-Ion, NiMH, Lead-Acid)
* Capacity (measured in mAh or Ah)
* Voltage and energy density
* Safety features and durability
== Displays ==
Displays are used in robots to present information such as system status, data, and interactive elements. Key features include:
* Size variations ranging from small to large panels
* Touchscreen capabilities
* High resolution displays
== Microphones ==
Microphones enable the robot to receive and process audio inputs, crucial for voice commands and auditory data. Factors to consider are:
* Sensitivity and noise cancellation
* Directionality (omnidirectional vs. unidirectional)
* Integration with voice recognition software
== Speakers ==
Speakers allow the robot to communicate audibly with its environment, essential for interaction and alerts. Considerations include:
* Power output and sound quality
* Size and mounting options
* Compatibility with audio processing hardware
== Wiring and Connectors ==
Proper wiring and connectors ensure reliable communication and power supply throughout the robot's components.
* Types of wires (gauge, shielding)
* Connectors (pin types, waterproofing)
* Cable management solutions
[[Category:Robots]]
c5629ea3dd2720c376229c77117ad7afdaa42b14
198
197
2024-04-24T07:57:52Z
Mrroboto
5
wikitext
text/x-wiki
[[File:Stompy.jpg|right|300px|thumb]]
{{infobox robot
| name = Stompy
| organization = [[K-Scale Labs]]
| cost = $10,000
}}
Stompy is an open-source humanoid robot developed by [[K-Scale Labs]].
= Hardware =
This page is dedicated to detailing the hardware selections for humanoid robots, including various components such as actuators, cameras, compute units, PCBs and modules, batteries, displays, microphones, speakers, as well as wiring and connectors.
== Actuators ==
Actuators are the components that allow the robot to move and interact with its environment. They convert energy into mechanical motion. Common types used in humanoid robots include:
* Servo motors
* Stepper motors
* Linear actuators
== Cameras ==
Cameras are essential for visual processing, allowing the robot to perceive its surroundings. Important considerations include:
* Resolution and frame rate
* Field of view
* Depth sensing capabilities (3D cameras)
== Compute ==
The compute section handles the processing requirements of the robot. This includes:
* Microprocessors and microcontrollers
* Single-board computers like Raspberry Pi or Nvidia Jetson
* Dedicated AI accelerators for machine learning tasks
== PCB and Modules ==
Printed Circuit Boards (PCBs) and the modules on them are the backbone of the robot's electronic system.
* Main control board
* Power management modules
* Sensor interfaces
* Communication modules (Wi-Fi, Bluetooth)
== Batteries ==
Batteries provide the necessary power to all robotic systems and are crucial for mobile autonomy. Selection factors include:
* Battery type (Li-Ion, NiMH, Lead-Acid)
* Capacity (measured in mAh or Ah)
* Voltage and energy density
* Safety features and durability
== Displays ==
Displays are used in robots to present information such as system status, data, and interactive elements. Key features include:
* Size variations ranging from small to large panels
* Touchscreen capabilities
* High resolution displays
== Microphones ==
Microphones enable the robot to receive and process audio inputs, crucial for voice commands and auditory data. Factors to consider are:
* Sensitivity and noise cancellation
* Directionality (omnidirectional vs. unidirectional)
* Integration with voice recognition software
== Speakers ==
Speakers allow the robot to communicate audibly with its environment, essential for interaction and alerts. Considerations include:
* Power output and sound quality
* Size and mounting options
* Compatibility with audio processing hardware
== Wiring and Connectors ==
Proper wiring and connectors ensure reliable communication and power supply throughout the robot's components.
* Types of wires (gauge, shielding)
* Connectors (pin types, waterproofing)
* Cable management solutions
= Simulation =
For the latest simulation artifacts, see [https://kscale.dev/ the website].
[[Category:Robots]]
3ce03b40d040a8253e96804412f92267b0b94b26
K-Scale Labs
0
5
199
48
2024-04-24T07:58:24Z
Mrroboto
5
wikitext
text/x-wiki
[https://kscale.dev/ K-Scale Labs] is building an open-source humanoid robot called [[Stompy]].
{{infobox company
| name = K-Scale Labs
| country = United States
| website_link = https://kscale.dev/
| robots = [[Stompy]]
}}
[[Category:Companies]]
43baae20de1554a7100d463518d11ccd93034f9f
201
199
2024-04-24T07:59:44Z
Mrroboto
5
wikitext
text/x-wiki
[[File:Logo.png|right|200px|thumb]]
[https://kscale.dev/ K-Scale Labs] is building an open-source humanoid robot called [[Stompy]].
{{infobox company
| name = K-Scale Labs
| country = United States
| website_link = https://kscale.dev/
| robots = [[Stompy]]
}}
[[Category:Companies]]
2bfd4249601bb542b3fac0762535aa3816d07a4d
File:Logo.png
6
40
200
2024-04-24T07:59:02Z
Mrroboto
5
wikitext
text/x-wiki
The K-Scale Labs logo
ba44c02f5ecb30e4a9204639ba9e9fbe1d1bed2b
Building a PCB
0
41
203
2024-04-24T08:05:21Z
Mrroboto
5
Created page with "Walk-through and notes regarding how to design and ship a PCB."
wikitext
text/x-wiki
Walk-through and notes regarding how to design and ship a PCB.
b36ceecece334f4d82f0be420552f11918204ac4
204
203
2024-04-24T08:07:37Z
Mrroboto
5
wikitext
text/x-wiki
Walk-through and notes regarding how to design and ship a PCB.
[[Category: Hardware]]
[[Category: Guides]]
[[Category: Electronics]]
7f7f8a58a029adbd3a13d489a3f69713936315cb
Category:Hardware
14
42
205
2024-04-24T08:07:43Z
Mrroboto
5
Created blank page
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
206
205
2024-04-24T08:07:59Z
Mrroboto
5
wikitext
text/x-wiki
Pages related to building hardware.
766f95387b6b8e9a66a15fa7d341a630d19ef771
Category:Guides
14
43
207
2024-04-24T08:08:28Z
Mrroboto
5
Created page with "Pages designated as guides."
wikitext
text/x-wiki
Pages designated as guides.
de810d1cf8dd81ed0f0d0f99f2f40c7b98729347
Category:Electronics
14
44
208
2024-04-24T08:08:40Z
Mrroboto
5
Created page with "Pages related to dealing with electronics."
wikitext
text/x-wiki
Pages related to dealing with electronics.
611b344bd910427314f2c0b6ba39fd3d1c308d21
Main Page
0
1
209
193
2024-04-24T08:10:17Z
Mrroboto
5
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots. As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
Here are some resources to get started learning about humanoid robots.
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
|
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulated and real environments
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Agility]]
| [[Cassie]], [[DigitV3]], [[Digit]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[EVE]], [[NEO]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Era Robotics]]
| [[XBot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[XPENG]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|}
9df948bf81e237039fd5847a889c089341c11fec
220
209
2024-04-24T08:21:00Z
Mrroboto
5
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots. As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
Here are some resources to get started learning about humanoid robots.
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
|
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulated and real environments
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[EVE]], [[NEO]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Era Robotics]]
| [[XBot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[XPENG]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|}
49045503b3bf433f08ab8827d3fe5aca5ab84a78
221
220
2024-04-24T08:21:13Z
Mrroboto
5
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots. As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
Here are some resources to get started learning about humanoid robots.
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
|
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulated and real environments
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Era Robotics]]
| [[XBot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[XPENG]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|}
ab3ca86908b06d6faef26f8fcb5494cec1095dca
235
221
2024-04-24T09:14:22Z
Mrroboto
5
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots. As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
Here are some resources to get started learning about humanoid robots.
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
|
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulated and real environments
|-
| [[:Category:Guides|Guides]]
| Category for pages which act as guides
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Era Robotics]]
| [[XBot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[XPENG]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|}
ee04bb47ccb82de0f53870d06bed059cae75b3e3
237
235
2024-04-24T09:15:23Z
Mrroboto
5
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots. As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
Here are some resources to get started learning about humanoid robots.
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
|
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulated and real environments
|-
| [[:Category:Guides|Guides]]
| Category for pages which act as guides
|-
| [[:Category:Electronics|Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware|Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Software|Software]]
| Category for pages relating to software
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Era Robotics]]
| [[XBot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[XPENG]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|}
3c5cc1f27d8f48d3883d8abdf56020a79cb82490
238
237
2024-04-24T09:15:35Z
Mrroboto
5
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots. As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
Here are some resources to get started learning about humanoid robots.
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
|
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulated and real environments
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Software]]
| Category for pages relating to software
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Era Robotics]]
| [[XBot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[XPENG]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|}
338c4b364389cf4d8d4a78b5155c627387cbe867
Stompy Build Guide
0
45
210
2024-04-24T08:11:02Z
Mrroboto
5
Created page with "Build guide for constructing [[Stompy]]. [[Category: Hardware]] [[Category: Guides]] [[Category: Electronics]]"
wikitext
text/x-wiki
Build guide for constructing [[Stompy]].
[[Category: Hardware]]
[[Category: Guides]]
[[Category: Electronics]]
a08d7afa5a13fb8db1968afed18fda915a14ae5d
Stompy
0
2
211
198
2024-04-24T08:13:29Z
Mrroboto
5
wikitext
text/x-wiki
[[File:Stompy.jpg|right|300px|thumb]]
{{infobox robot
| name = Stompy
| organization = [[K-Scale Labs]]
| cost = $10,000
}}
Stompy is an open-source humanoid robot developed by [[K-Scale Labs]]. See the [[Stompy Build Guide|build guide]] for a walk-through of how to build one yourself.
= Hardware =
This page is dedicated to detailing the hardware selections for humanoid robots, including various components such as actuators, cameras, compute units, PCBs and modules, batteries, displays, microphones, speakers, as well as wiring and connectors.
== Actuators ==
Actuators are the components that allow the robot to move and interact with its environment. They convert energy into mechanical motion. Common types used in humanoid robots include:
* Servo motors
* Stepper motors
* Linear actuators
== Cameras ==
Cameras are essential for visual processing, allowing the robot to perceive its surroundings. Important considerations include:
* Resolution and frame rate
* Field of view
* Depth sensing capabilities (3D cameras)
== Compute ==
The compute section handles the processing requirements of the robot. This includes:
* Microprocessors and microcontrollers
* Single-board computers like Raspberry Pi or Nvidia Jetson
* Dedicated AI accelerators for machine learning tasks
== PCB and Modules ==
Printed Circuit Boards (PCBs) and the modules on them are the backbone of the robot's electronic system.
* Main control board
* Power management modules
* Sensor interfaces
* Communication modules (Wi-Fi, Bluetooth)
== Batteries ==
Batteries provide the necessary power to all robotic systems and are crucial for mobile autonomy. Selection factors include:
* Battery type (Li-Ion, NiMH, Lead-Acid)
* Capacity (measured in mAh or Ah)
* Voltage and energy density
* Safety features and durability
== Displays ==
Displays are used in robots to present information such as system status, data, and interactive elements. Key features include:
* Size variations ranging from small to large panels
* Touchscreen capabilities
* High resolution displays
== Microphones ==
Microphones enable the robot to receive and process audio inputs, crucial for voice commands and auditory data. Factors to consider are:
* Sensitivity and noise cancellation
* Directionality (omnidirectional vs. unidirectional)
* Integration with voice recognition software
== Speakers ==
Speakers allow the robot to communicate audibly with its environment, essential for interaction and alerts. Considerations include:
* Power output and sound quality
* Size and mounting options
* Compatibility with audio processing hardware
== Wiring and Connectors ==
Proper wiring and connectors ensure reliable communication and power supply throughout the robot's components.
* Types of wires (gauge, shielding)
* Connectors (pin types, waterproofing)
* Cable management solutions
= Simulation =
For the latest simulation artifacts, see [https://kscale.dev/ the website].
[[Category:Robots]]
cddc52a4487dd2ca873b8977afbfa8cb0322f7df
Cassie
0
24
212
202
2024-04-24T08:14:28Z
Mrroboto
5
wikitext
text/x-wiki
[[File:Cassie.jpg|right|200px|thumb]]
Cassie is a bipedal robot designed by Oregon State University and licensed and built by [[Agility]].
{{infobox robot
| name = Cassie
| organization = [[Agility]]
| video_link = https://www.youtube.com/watch?v=64hKiuJ31a4
| cost = $250,000
| height = 115 cm
| weight = 31 kg
| speed = >4 m/s (8.95 mph)
| battery_life = 5 hours
| battery_capacity = 1 kWh
| dof = 10 (5 per leg)
| number_made = ~12
| status = Retired
}}
==Development==
On February 9, 2017, Cassie was unveiled at an event at Oregon State University featuring a live demo. A video was also posted to both the OSU and Agility Robotics YouTube channels.
On September 5, 2017, the University of Michigan received the first Cassie, which they named "Cassie Blue." They would later receive "Cassie Yellow."
==Firsts and World Record==
Oregon State University's DRAIL lab used a control policy trained with reinforcement learning to have Cassie run a 100 m dash in 24.73 seconds. The Guinness World Record attempt required a standing start and finish, so Cassie averaged a speed of over 4 m/s. For comparison, at the time the Guinness World Record for the fastest running humanoid was held by ASIMO at 2.5 m/s, a run that had to be done indoors on a specially leveled floor. Cassie's run took place outdoors at the Whyte Track and Field Center on Oregon State's campus. The record was announced in September 2022, but the actual run took place about six months earlier. Cassie and the OSU DRAIL lab have an entire page dedicated to them in the 2024 Guinness World Records book. Footage of the run can be seen [https://www.youtube.com/watch?v=DdojWYOK0Nc here].
[[Category: Robots]]
34abd9c527bfa7f653060c905b52d1f576e277cf
H1
0
3
213
75
2024-04-24T08:15:32Z
Mrroboto
5
wikitext
text/x-wiki
The H1 is a humanoid robot from [[Unitree]]. It is available for purchase [https://shop.unitree.com/products/unitree-h1 here].
{{infobox robot
| name = H1
| organization = [[Unitree]]
| video_link = https://www.youtube.com/watch?v=83ShvgtyFAg
| cost = $150,000
| purchase_link = https://shop.unitree.com/products/unitree-h1
}}
[[Category: Robots]]
d3f470d790d94fc2fa4a6b3611b62215d4052ed4
Optimus
0
22
214
194
2024-04-24T08:16:58Z
Mrroboto
5
wikitext
text/x-wiki
[[File:Optimus Tesla (1).jpg|right|200px|thumb]]
Optimus is a humanoid robot from [[Tesla]].
{{infobox robot
| name = Optimus
| organization = [[Tesla]]
| video_link = https://www.youtube.com/watch?v=cpraXaw7dyc
| cost = Unknown, rumored $20k
}}
Tesla began work on the Optimus robot in 2021.
[[Category:Robots]]
b35d08f8818c2b84912702c2888bf2247d48a8e8
236
214
2024-04-24T09:14:49Z
User2024
6
wikitext
text/x-wiki
[[File:Optimus Tesla (1).jpg|right|200px|thumb]]
Optimus is a humanoid robot from [[Tesla]].
{{infobox robot
| name = Optimus
| organization = [[Tesla]]
| height = 5 ft 8 in (173 cm)
| weight = 58 kg
| video_link = https://www.youtube.com/watch?v=cpraXaw7dyc
| cost = Unknown, rumored $20k
}}
Tesla began work on the Optimus robot in 2021.
[[Category:Robots]]
4ebfc66ae0ce2466c9f41ce3eb73732822bcd6ad
Sanctuary
0
9
215
39
2024-04-24T08:17:45Z
Mrroboto
5
wikitext
text/x-wiki
Sanctuary AI is a humanoid robot company. Their robot is called [[Phoenix]].
{{infobox company
| name = Sanctuary
| country = United States
| website_link = https://sanctuary.ai/
| robots = [[Phoenix]]
}}
[[Category:Companies]]
73c4775e3cfc7f680e9efb6e343f7d807c4c2146
1X
0
10
216
74
2024-04-24T08:18:46Z
Mrroboto
5
wikitext
text/x-wiki
[https://www.1x.tech/ 1X] (formerly known as Halodi Robotics) is a humanoid robotics company based in Moss, Norway. They have two robots: [[Eve]] and [[Neo]]. [[Eve]] is a wheeled robot while [[Neo]] has legs. The company is known for its high-torque BLDC motors, which it developed in house. Those motors are paired with low-gear-ratio cable drives, and Eve and Neo are designed for safe human interaction by keeping actuator inertia low.
{{infobox company
| name = 1X Technologies
| country = United States
| website_link = https://www.1x.tech/
| robots = [[Eve]], [[Neo]]
}}
[[Category:Companies]]
5a8403a4c8da4adcdbece6263ec1a8be7d0dd465
Unitree
0
6
217
37
2024-04-24T08:19:27Z
Mrroboto
5
wikitext
text/x-wiki
Unitree is a company based in China that has built a number of different types of robots.
{{infobox company
| name = Unitree
| country = China
| website_link = https://www.unitree.com/
| robots = [[H1]]
}}
[[Category:Companies]]
73d68bb511ac8eaa83cfca4764c1f399c20f781e
Physical Intelligence
0
12
218
43
2024-04-24T08:19:59Z
Mrroboto
5
wikitext
text/x-wiki
[https://physicalintelligence.company/ Physical Intelligence] is a company based in the Bay Area which is building foundation models for embodied AI.
{{infobox company
| name = Physical Intelligence
| country = United States
| website_link = https://physicalintelligence.company/
}}
[[Category:Companies]]
14c75bc8d5e6d8e6935a25d73db01c8c448760ed
Skild
0
11
219
44
2024-04-24T08:20:23Z
Mrroboto
5
wikitext
text/x-wiki
Skild is a stealth foundation-model startup founded by two faculty members from Carnegie Mellon University.
{{infobox company
| name = Skild
| country = United States
}}
=== Articles ===
* [https://www.theinformation.com/articles/venture-fomo-hits-robotics-as-young-startup-gets-1-5-billion-valuation Venture FOMO Hits Robotics as Young Startup Gets $1.5 Billion Valuation]
[[Category:Companies]]
880bcffd8281a04a0d81d27cab54ef044e348e99
CAN/IMU/Cameras with Jetson Orin
0
20
222
59
2024-04-24T09:01:46Z
Mrroboto
5
wikitext
text/x-wiki
The Jetson Orin is a development board from Nvidia.
=== CAN Bus ===
See [https://docs.nvidia.com/jetson/archives/r34.1/DeveloperGuide/text/HR/ControllerAreaNetworkCan.html here] for notes on configuring the CAN bus for the Jetson.
Install dependencies:
<syntaxhighlight lang="bash">
sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt update
sudo apt upgrade
sudo apt install g++ python3.11-dev
</syntaxhighlight>
Initialize the CAN bus on startup:
<syntaxhighlight lang="bash">
#!/bin/bash
# Set pinmux.
busybox devmem 0x0c303000 32 0x0000C400
busybox devmem 0x0c303008 32 0x0000C458
busybox devmem 0x0c303010 32 0x0000C400
busybox devmem 0x0c303018 32 0x0000C458
# Install modules.
modprobe can
modprobe can_raw
modprobe mttcan
# Turn off CAN.
ip link set down can0
ip link set down can1
# Set parameters.
ip link set can0 type can bitrate 1000000 dbitrate 1000000 berr-reporting on fd on loopback off
ip link set can1 type can bitrate 1000000 dbitrate 1000000 berr-reporting on fd on loopback off
# Turn on CAN.
ip link set up can0
ip link set up can1
</syntaxhighlight>
You can run this script automatically on startup by writing a service configuration to (for example) <code>/etc/systemd/system/can_setup.service</code>
<syntaxhighlight lang="text">
[Unit]
Description=Initialize CAN Interfaces
After=network.target
[Service]
Type=oneshot
ExecStart=/opt/kscale/enable_can.sh
RemainAfterExit=true
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
To enable this, run:
<syntaxhighlight lang="bash">
sudo systemctl enable can_setup
sudo systemctl start can_setup
</syntaxhighlight>
=== Cameras ===
==== Arducam IMX 219 ====
* [https://www.arducam.com/product/arducam-imx219-multi-camera-kit-for-the-nvidia-jetson-agx-orin/ Product Page]
** Shipping was pretty fast
** Order a couple of backup cameras, since some of the cameras they shipped arrived broken
* [https://docs.arducam.com/Nvidia-Jetson-Camera/Nvidia-Jetson-Orin-Series/NVIDIA-Jetson-AGX-Orin/Quick-Start-Guide/ Quick start guide]
Run the installation script:
<syntaxhighlight lang="bash">
wget https://github.com/ArduCAM/MIPI_Camera/releases/download/v0.0.3/install_full.sh
chmod u+x install_full.sh
./install_full.sh -m imx219
</syntaxhighlight>
Supported kernel versions (see releases [https://github.com/ArduCAM/MIPI_Camera/releases here]):
* <code>5.10.104-tegra-35.3.1</code>
* <code>5.10.120-tegra-35.4.1</code>
Install an older kernel from [https://developer.nvidia.com/embedded/jetson-linux-archive here]. This required downgrading to Ubuntu 20.04 (only changing <code>/etc/os-version</code>).
Install dependencies:
<syntaxhighlight lang="bash">
sudo apt update
sudo apt install \
gstreamer1.0-tools \
gstreamer1.0-alsa \
gstreamer1.0-plugins-base \
gstreamer1.0-plugins-good \
gstreamer1.0-plugins-bad \
gstreamer1.0-plugins-ugly \
gstreamer1.0-libav
sudo apt install \
libgstreamer1.0-dev \
libgstreamer-plugins-base1.0-dev \
libgstreamer-plugins-good1.0-dev \
libgstreamer-plugins-bad1.0-dev
sudo apt install \
v4l-utils \
ffmpeg
</syntaxhighlight>
Make sure the camera shows up:
<syntaxhighlight lang="bash">
v4l2-ctl --list-formats-ext
</syntaxhighlight>
Capture a frame from the camera:
<syntaxhighlight lang="bash">
gst-launch-1.0 nvarguscamerasrc sensor-id=0 ! "video/x-raw(memory:NVMM), width=1280, height=720, framerate=60/1" ! nvvidconv ! jpegenc snapshot=TRUE ! filesink location=test.jpg
</syntaxhighlight>
Alternatively, use the following Python code:
<syntaxhighlight lang="python">
import cv2

# GStreamer pipeline string matching the gst-launch-1.0 command above.
gst_str = (
    'nvarguscamerasrc sensor-id=0 ! '
    'video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)NV12, framerate=(fraction)60/1 ! '
    'nvvidconv flip-method=0 ! '
    'video/x-raw, width=(int)1280, height=(int)720, format=(string)BGRx ! '
    'videoconvert ! '
    'video/x-raw, format=(string)BGR ! '
    'appsink'
)

cap = cv2.VideoCapture(gst_str, cv2.CAP_GSTREAMER)
while True:
    ret, frame = cap.read()
    if ret:
        print(frame)
        # Press 'q' to stop capturing.
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    else:
        break
cap.release()
cv2.destroyAllWindows()
</syntaxhighlight>
=== IMU ===
We're using the [https://ozzmaker.com/product/berryimu-accelerometer-gyroscope-magnetometer-barometricaltitude-sensor/ BerryIMU v3]. To use it, connect pin 3 on the Jetson to SDA and pin 5 to SCL for I2C bus 7. You can verify the connection by checking that the output of the following command matches:
<syntaxhighlight lang="bash">
$ sudo i2cdetect -y -r 7
0 1 2 3 4 5 6 7 8 9 a b c d e f
00: -- -- -- -- -- -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- 1c -- -- --
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
40: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
60: -- -- -- -- -- -- -- -- -- -- 6a -- -- -- -- --
70: -- -- -- -- -- -- -- 77
</syntaxhighlight>
The equivalent command on the Raspberry Pi should use bus 1:
<syntaxhighlight lang="bash">
sudo i2cdetect -y -r 1
</syntaxhighlight>
The default addresses are:
* <code>0x6A</code>: Gyroscope and accelerometer
* <code>0x1C</code>: Magnetometer
* <code>0x77</code>: Barometer
[[Category: Hardware]]
[[Category: Electronics]]
7021fbaed1011dfc8b981f679889c12d097b78a8
225
222
2024-04-24T09:04:56Z
Mrroboto
5
wikitext
text/x-wiki
The Jetson Orin is a development board from Nvidia.
=== CAN Bus ===
See [https://docs.nvidia.com/jetson/archives/r34.1/DeveloperGuide/text/HR/ControllerAreaNetworkCan.html here] for notes on configuring the CAN bus for the Jetson.
[[File:Can bus connections 2.png|none|200px|thumb]]
Install dependencies:
<syntaxhighlight lang="bash">
sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt update
sudo apt upgrade
sudo apt install g++ python3.11-dev
</syntaxhighlight>
Initialize the CAN bus on startup:
<syntaxhighlight lang="bash">
#!/bin/bash
# Set pinmux.
busybox devmem 0x0c303000 32 0x0000C400
busybox devmem 0x0c303008 32 0x0000C458
busybox devmem 0x0c303010 32 0x0000C400
busybox devmem 0x0c303018 32 0x0000C458
# Install modules.
modprobe can
modprobe can_raw
modprobe mttcan
# Turn off CAN.
ip link set down can0
ip link set down can1
# Set parameters.
ip link set can0 type can bitrate 1000000 dbitrate 1000000 berr-reporting on fd on loopback off
ip link set can1 type can bitrate 1000000 dbitrate 1000000 berr-reporting on fd on loopback off
# Turn on CAN.
ip link set up can0
ip link set up can1
</syntaxhighlight>
You can run this script automatically on startup by writing a service configuration to (for example) <code>/etc/systemd/system/can_setup.service</code>
<syntaxhighlight lang="text">
[Unit]
Description=Initialize CAN Interfaces
After=network.target
[Service]
Type=oneshot
ExecStart=/opt/kscale/enable_can.sh
RemainAfterExit=true
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
To enable this, run:
<syntaxhighlight lang="bash">
sudo systemctl enable can_setup
sudo systemctl start can_setup
</syntaxhighlight>
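Once the interfaces are up, it can be useful to send a test frame from Python. The following is a minimal sketch using the python-can library (not part of the original setup); the arbitration ID and payload are arbitrary placeholders and should be replaced with whatever your motor controllers expect.
<syntaxhighlight lang="python">
# Minimal sketch: verify that can0 is usable from Python with python-can
# (pip install python-can). Assumes can0 was brought up by the script above.
import can

with can.Bus(channel="can0", interface="socketcan", fd=True) as bus:
    # Hypothetical arbitration ID and payload; replace with your device's protocol.
    msg = can.Message(arbitration_id=0x123, data=[0x01, 0x02, 0x03], is_extended_id=False)
    bus.send(msg)
    # Print the first frame that arrives within three seconds (None on timeout).
    print(bus.recv(timeout=3.0))
</syntaxhighlight>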
=== Cameras ===
==== Arducam IMX 219 ====
* [https://www.arducam.com/product/arducam-imx219-multi-camera-kit-for-the-nvidia-jetson-agx-orin/ Product Page]
** Shipping was pretty fast
** Order a couple of backup cameras, since some of the cameras they shipped arrived broken
* [https://docs.arducam.com/Nvidia-Jetson-Camera/Nvidia-Jetson-Orin-Series/NVIDIA-Jetson-AGX-Orin/Quick-Start-Guide/ Quick start guide]
Run the installation script:
<syntaxhighlight lang="bash">
wget https://github.com/ArduCAM/MIPI_Camera/releases/download/v0.0.3/install_full.sh
chmod u+x install_full.sh
./install_full.sh -m imx219
</syntaxhighlight>
Supported kernel versions (see releases [https://github.com/ArduCAM/MIPI_Camera/releases here]):
* <code>5.10.104-tegra-35.3.1</code>
* <code>5.10.120-tegra-35.4.1</code>
Install an older kernel from [https://developer.nvidia.com/embedded/jetson-linux-archive here]. This required downgrading to Ubuntu 20.04 (only changing <code>/etc/os-version</code>).
Install dependencies:
<syntaxhighlight lang="bash">
sudo apt update
sudo apt install \
gstreamer1.0-tools \
gstreamer1.0-alsa \
gstreamer1.0-plugins-base \
gstreamer1.0-plugins-good \
gstreamer1.0-plugins-bad \
gstreamer1.0-plugins-ugly \
gstreamer1.0-libav
sudo apt install \
libgstreamer1.0-dev \
libgstreamer-plugins-base1.0-dev \
libgstreamer-plugins-good1.0-dev \
libgstreamer-plugins-bad1.0-dev
sudo apt install \
v4l-utils \
ffmpeg
</syntaxhighlight>
Make sure the camera shows up:
<syntaxhighlight lang="bash">
v4l2-ctl --list-formats-ext
</syntaxhighlight>
Capture a frame from the camera:
<syntaxhighlight lang="bash">
gst-launch-1.0 nvarguscamerasrc sensor-id=0 ! "video/x-raw(memory:NVMM), width=1280, height=720, framerate=60/1" ! nvvidconv ! jpegenc snapshot=TRUE ! filesink location=test.jpg
</syntaxhighlight>
Alternatively, use the following Python code:
<syntaxhighlight lang="python">
import cv2

# GStreamer pipeline string matching the gst-launch-1.0 command above.
gst_str = (
    'nvarguscamerasrc sensor-id=0 ! '
    'video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)NV12, framerate=(fraction)60/1 ! '
    'nvvidconv flip-method=0 ! '
    'video/x-raw, width=(int)1280, height=(int)720, format=(string)BGRx ! '
    'videoconvert ! '
    'video/x-raw, format=(string)BGR ! '
    'appsink'
)

cap = cv2.VideoCapture(gst_str, cv2.CAP_GSTREAMER)
while True:
    ret, frame = cap.read()
    if ret:
        print(frame)
        # Press 'q' to stop capturing.
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    else:
        break
cap.release()
cv2.destroyAllWindows()
</syntaxhighlight>
=== IMU ===
We're using the [https://ozzmaker.com/product/berryimu-accelerometer-gyroscope-magnetometer-barometricaltitude-sensor/ BerryIMU v3]. To use it, connect pin 3 on the Jetson to SDA and pin 5 to SCL for I2C bus 7. You can verify the connection by checking that the output of the following command matches:
<syntaxhighlight lang="bash">
$ sudo i2cdetect -y -r 7
0 1 2 3 4 5 6 7 8 9 a b c d e f
00: -- -- -- -- -- -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- 1c -- -- --
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
40: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
60: -- -- -- -- -- -- -- -- -- -- 6a -- -- -- -- --
70: -- -- -- -- -- -- -- 77
</syntaxhighlight>
The equivalent command on the Raspberry Pi should use bus 1:
<syntaxhighlight lang="bash">
sudo i2cdetect -y -r 1
</syntaxhighlight>
The default addresses are:
* <code>0x6A</code>: Gyroscope and accelerometer
* <code>0x1C</code>: Magnetometer
* <code>0x77</code>: Barometer
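As a quick sanity check beyond <code>i2cdetect</code>, the sketch below reads the gyroscope/accelerometer's WHO_AM_I register from Python using the smbus2 library. This is not from the BerryIMU documentation; the register address (0x0F) and expected value (0x6A) are assumptions based on the LSM6DSL datasheet, so double-check them against your sensor.
<syntaxhighlight lang="python">
# Minimal sketch: read the WHO_AM_I register of the gyro/accelerometer over I2C
# (pip install smbus2). Register 0x0F and the expected 0x6A reply are assumptions
# based on the LSM6DSL datasheet for the BerryIMU v3.
from smbus2 import SMBus

I2C_BUS = 7            # bus 7 on the Jetson; use 1 on a Raspberry Pi
GYRO_ACCEL_ADDR = 0x6A
WHO_AM_I_REG = 0x0F

with SMBus(I2C_BUS) as bus:
    value = bus.read_byte_data(GYRO_ACCEL_ADDR, WHO_AM_I_REG)
    print(f"WHO_AM_I = 0x{value:02X}")  # expect 0x6A if the wiring is correct
</syntaxhighlight>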
[[Category: Hardware]]
[[Category: Electronics]]
51030b57b98b338a704c3f6daae5082cb8b122c3
File:Can bus connections.webp
6
46
223
2024-04-24T09:02:43Z
Mrroboto
5
wikitext
text/x-wiki
Image of the correctly-connected CAN busses
32b09284eeba2cda52c2fa7a0b317a199e33897c
File:Can bus connections 2.png
6
47
224
2024-04-24T09:04:28Z
Mrroboto
5
wikitext
text/x-wiki
Image of the CAN bus connections on the Jetson
c2dc41b9f357149d1858ca42a66c4455f810a152
File:Project aria extrinsics.png
6
48
226
2024-04-24T09:06:07Z
Mrroboto
5
wikitext
text/x-wiki
Diagram showing the Project Aria extrinsics
04e5a7eaac48013ebad9eae62371b20dc0622dbc
Project Aria
0
21
227
64
2024-04-24T09:06:10Z
Mrroboto
5
wikitext
text/x-wiki
AR glasses from Meta used for data capture.
=== References ===
==== Links ====
* [https://www.projectaria.com/ Website]
* [https://docs.ego-exo4d-data.org/ Ego4D Documentation]
* [https://facebookresearch.github.io/projectaria_tools/docs/intro Project Aria Documentation]
** [https://facebookresearch.github.io/projectaria_tools/docs/data_formats/mps/mps_trajectory Specific page regarding trajectories]
==== Datasets ====
* [https://www.projectaria.com/datasets/apd/ APD Dataset]
=== Extrinsics ===
Here is a diagram showing the extrinsics of the various cameras on the Project Aria headset.
[[File:Project aria extrinsics.png|thumb]]
2e546dbc3c7780e4f8480b2e83e53f078f31fe95
228
227
2024-04-24T09:06:20Z
Mrroboto
5
wikitext
text/x-wiki
AR glasses from Meta used for data capture.
=== References ===
==== Links ====
* [https://www.projectaria.com/ Website]
* [https://docs.ego-exo4d-data.org/ Ego4D Documentation]
* [https://facebookresearch.github.io/projectaria_tools/docs/intro Project Aria Documentation]
** [https://facebookresearch.github.io/projectaria_tools/docs/data_formats/mps/mps_trajectory Specific page regarding trajectories]
==== Datasets ====
* [https://www.projectaria.com/datasets/apd/ APD Dataset]
=== Extrinsics ===
Here is a diagram showing the extrinsics of the various cameras on the Project Aria headset.
[[File:Project aria extrinsics.png|none|thumb]]
1838dd0211ffcc31ab2851c982c90f64b2da43d0
229
228
2024-04-24T09:06:28Z
Mrroboto
5
wikitext
text/x-wiki
AR glasses from Meta used for data capture.
=== References ===
==== Links ====
* [https://www.projectaria.com/ Website]
* [https://docs.ego-exo4d-data.org/ Ego4D Documentation]
* [https://facebookresearch.github.io/projectaria_tools/docs/intro Project Aria Documentation]
** [https://facebookresearch.github.io/projectaria_tools/docs/data_formats/mps/mps_trajectory Specific page regarding trajectories]
==== Datasets ====
* [https://www.projectaria.com/datasets/apd/ APD Dataset]
=== Extrinsics ===
Here is a diagram showing the extrinsics of the various cameras on the Project Aria headset.
[[File:Project aria extrinsics.png|none|400px|thumb]]
6581aaecf783619297e517f9505b51a22722e639
Serial Peripheral Interface (SPI)
0
49
230
2024-04-24T09:08:53Z
Mrroboto
5
Created page with "Serial Peripheral Interface (SPI) is commonly used for connecting to peripheral devices. === Conventions === * <code>CS</code> is Chip Select ** On Raspberry Pi, this is <co..."
wikitext
text/x-wiki
Serial Peripheral Interface (SPI) is commonly used for connecting to peripheral devices.
=== Conventions ===
* <code>CS</code> is Chip Select
** On Raspberry Pi, this is <code>CE0</code> or <code>CE1</code>
** This is a digital signal that tells the slave device to listen to the master
* <code>DC</code> is Data/Command
** This is a digital signal that tells the slave device whether the data on the <code>MOSI</code> line is a command or data
* <code>SDA</code> is the data line
** Also called <code>MOSI</code> (Master Out Slave In) or <code>DIN</code> (Data In)
** This is the line on which the master sends data to the slave
* <code>SCL</code> is the clock line
** Also called <code>CLK</code> or <code>SCLK</code> (Serial Clock)
** This is the line that the master uses to send clock pulses to the slave
* <code>RST</code> is reset
** This is a digital signal that resets the slave device
e7227546fb1ca6b3861518cf846bc4912ea61391
231
230
2024-04-24T09:09:44Z
Mrroboto
5
wikitext
text/x-wiki
Serial Peripheral Interface (SPI) is commonly used for connecting to peripheral devices. A commonly used alternative is [[Inter-Integrated Circuit (I2C)]].
=== Conventions ===
* <code>CS</code> is Chip Select
** On Raspberry Pi, this is <code>CE0</code> or <code>CE1</code>
** This is a digital signal that tells the slave device to listen to the master
* <code>DC</code> is Data/Command
** This is a digital signal that tells the slave device whether the data on the <code>MOSI</code> line is a command or data
* <code>SDA</code> is the data line
** Also called <code>MOSI</code> (Master Out Slave In) or <code>DIN</code> (Data In)
** This is the line on which the master sends data to the slave
* <code>SCL</code> is the clock line
** Also called <code>CLK</code> or <code>SCLK</code> (Serial Clock)
** This is the line that the master uses to send clock pulses to the slave
* <code>RST</code> is reset
** This is a digital signal that resets the slave device
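As a concrete illustration of the conventions above, here is a minimal sketch using the <code>spidev</code> library on a Raspberry Pi. The bus number, mode, and command byte are placeholders; the <code>DC</code> and <code>RST</code> lines are ordinary GPIO pins and are not handled by <code>spidev</code> itself.
<syntaxhighlight lang="python">
# Minimal sketch: exchange one byte with an SPI peripheral using spidev
# (pip install spidev). The values below are placeholders for illustration.
import spidev

spi = spidev.SpiDev()
spi.open(0, 0)               # SPI bus 0, device 0 (CE0 acts as the CS line)
spi.max_speed_hz = 1_000_000
spi.mode = 0                 # clock polarity/phase; check the peripheral's datasheet

# The master drives SCLK and shifts 0x00 (a placeholder command byte) out on
# MOSI/SDA; whatever the peripheral shifts back on MISO is returned.
reply = spi.xfer2([0x00])
print(reply)
spi.close()
</syntaxhighlight>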
[[Category: Communication]]
bff26dc83f2c170b69f533a368651e6349ef9790
Inter-Integrated Circuit (I2C)
0
50
232
2024-04-24T09:11:22Z
Mrroboto
5
Created page with "=== Characteristics === * Bi-directional * Widely used protocol for short-distance communication * <code>SDA</code> and <code>SCL</code> ** Both are pulled high Category:..."
wikitext
text/x-wiki
=== Characteristics ===
* Bi-directional
* Widely used protocol for short-distance communication
* <code>SDA</code> and <code>SCL</code>
** Both are pulled high
[[Category: Communication]]
739be617979997d5eccf033ae317abb2ed9b6696
233
232
2024-04-24T09:11:57Z
Mrroboto
5
wikitext
text/x-wiki
Inter-Integrated Circuit (I2C) is commonly used for connecting to peripheral devices. A commonly used alternative is [[Serial Peripheral Interface (SPI)]].
=== Characteristics ===
* Bi-directional
* Widely used protocol for short-distance communication
* <code>SDA</code> and <code>SCL</code>
** Both are pulled high
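A minimal sketch of talking to an I2C peripheral from Python with the <code>smbus2</code> library is shown below. The device address and register are hypothetical placeholders, not taken from any specific part.
<syntaxhighlight lang="python">
# Minimal sketch: write and read one register of an I2C device with smbus2
# (pip install smbus2). The address and register are hypothetical placeholders.
from smbus2 import SMBus

DEVICE_ADDR = 0x48   # hypothetical 7-bit device address
REGISTER = 0x00      # hypothetical register

with SMBus(1) as bus:                                      # bus 1 on a Raspberry Pi
    bus.write_byte_data(DEVICE_ADDR, REGISTER, 0x01)       # write one byte
    print(hex(bus.read_byte_data(DEVICE_ADDR, REGISTER)))  # read it back
</syntaxhighlight>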
[[Category: Communication]]
12297408c799ce1d58e4cfdfff520a06042fb29b
Category:Communication
14
51
234
2024-04-24T09:13:27Z
Mrroboto
5
Created page with "Pages relating to various communication protocols."
wikitext
text/x-wiki
Pages relating to various communication protocols.
12975d0a5247d36763f2e885c4ad0ec206a88e07
Category:Software
14
52
239
2024-04-24T09:15:48Z
Mrroboto
5
Created page with "Pages which have something to do with software."
wikitext
text/x-wiki
Pages which have something to do with software.
240de77297915a3a3e5c6fd7fe9a691cb7984478
Learning algorithms
0
32
240
132
2024-04-24T09:15:59Z
Mrroboto
5
wikitext
text/x-wiki
= Learning algorithms =
Learning algorithms make it possible to train humanoids to perform different skills such as manipulation and locomotion. Below is an overview of general approaches to training machine learning models for humanoid robots.
== Physics engines ==
Physics engines are software libraries designed to simulate physical systems in a virtual environment. They are crucial in a variety of fields such as video games, animation, robotics, and engineering simulations. These engines handle the mathematics involved in simulating physical processes like motion, collisions, and fluid dynamics.
===PhysX===
PhysX is a physics engine by NVIDIA used primarily for video games and real-time simulations. It supports rigid body dynamics, cloth simulation, and particle effects, enhancing realism and interactivity in 3D environments.
===Mujoco===
MuJoCo (Multi-Joint dynamics with Contact) is a physics engine designed for research in robotics and biomechanics. It is known for its speed, accuracy, and ease of use, making it popular for simulating robots and other articulated systems with contact.
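As a minimal sketch of how MuJoCo is typically driven from Python, the snippet below loads a trivial placeholder model from an XML string and steps the simulation; it assumes the official <code>mujoco</code> Python bindings are installed.
<syntaxhighlight lang="python">
import mujoco

XML = """
<mujoco>
  <worldbody>
    <body name="torso" pos="0 0 1">
      <freejoint/>
      <geom type="capsule" size="0.05 0.2"/>
    </body>
  </worldbody>
</mujoco>
"""

model = mujoco.MjModel.from_xml_string(XML)
data = mujoco.MjData(model)
for _ in range(1000):          # ~2 seconds at the default 2 ms timestep
    mujoco.mj_step(model, data)
print(data.qpos)               # generalized coordinates after the rollout
</syntaxhighlight>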
==Simulators==
===[[Isaac Sim]]===
===[[VSim]]===
==Training frameworks==
Popular training frameworks are described below.
===Isaac Gym===
Isaac Gym is NVIDIA's robotics simulation tool, part of the Isaac SDK. It leverages GPU acceleration to enable the simulation of thousands of robot bodies simultaneously, making it highly efficient for training machine learning models in robotics. It's designed to streamline robotics applications, focusing on reinforcement learning in a virtual environment.
===Gymnasium===
Gymnasium is an open-source toolkit for developing and comparing reinforcement learning algorithms. Originally developed by OpenAI as "Gym," it provides a standardized set of environments (like Atari games, robotic simulations, etc.) to test and benchmark AI algorithms. It's widely used in the AI research community to foster innovation and replication in RL studies.
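A minimal Gymnasium interaction loop looks like the sketch below; the random action is a stand-in for a trained policy, and the MuJoCo-based <code>Humanoid-v4</code> environment assumes <code>gymnasium[mujoco]</code> is installed.
<syntaxhighlight lang="python">
import gymnasium as gym

env = gym.make("Humanoid-v4")           # requires gymnasium[mujoco]
obs, info = env.reset(seed=0)
for _ in range(1000):
    action = env.action_space.sample()  # stand-in for a trained policy
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
env.close()
</syntaxhighlight>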
== Training algorithms ==
===[[Imitation learning]]===
Imitation Learning is a technique where models learn to perform tasks by mimicking expert behaviors. This approach is often used when defining explicit reward functions is challenging. It accelerates learning by using pre-collected datasets of expert demonstrations, reducing the need for trial-and-error in initial learning phases.
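As a toy sketch of the idea, the snippet below performs behavior cloning with a linear policy: it fits a least-squares map from observations to actions on a synthetic stand-in for an expert demonstration dataset. Real systems use neural-network policies and logged robot data.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for logged expert demonstrations: (observation, action) pairs
obs = rng.standard_normal((1000, 10))    # 10-D observations
expert_w = rng.standard_normal((10, 3))  # hidden "expert" mapping
act = obs @ expert_w                     # 3-D expert actions

# Behavior cloning: supervised least-squares fit of actions on observations
W, *_ = np.linalg.lstsq(obs, act, rcond=None)

def policy(o):
    return o @ W                         # imitates the expert on similar states

print(np.abs(policy(obs) - act).max())   # near-zero imitation error on the demos
</syntaxhighlight>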
===[[Reinforcement Learning]]===
Reinforcement Learning involves agents learning to make decisions by interacting with an environment to maximize cumulative rewards. It's foundational in fields where sequential decision-making is crucial, like gaming, autonomous vehicles, and robotics. RL uses methods like Q-learning and policy gradient to iteratively improve agent performance based on feedback from the environment.
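As a minimal concrete instance of the update rule, the sketch below runs tabular Q-learning on Gymnasium's small <code>FrozenLake-v1</code> environment. Humanoid control in practice relies on deep policy-gradient methods such as PPO; this only illustrates the reward-driven feedback loop.
<syntaxhighlight lang="python">
import numpy as np
import gymnasium as gym

env = gym.make("FrozenLake-v1", is_slippery=False)
Q = np.zeros((env.observation_space.n, env.action_space.n))
alpha, gamma, eps = 0.1, 0.99, 0.1       # learning rate, discount, exploration rate

for episode in range(2000):
    s, _ = env.reset()
    done = False
    while not done:
        # Epsilon-greedy action selection
        a = env.action_space.sample() if np.random.rand() < eps else int(Q[s].argmax())
        s_next, r, terminated, truncated, _ = env.step(a)
        # Q-learning update: move Q[s, a] toward the bootstrapped return
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s, done = s_next, terminated or truncated
env.close()
</syntaxhighlight>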
[[Category: Software]]
a3b43db2638b0e61cc57b04ad62aab6a73999c1f
Phoenix
0
53
241
2024-04-24T10:05:25Z
User2024
6
Created page with "Phoenix is a humanoid robot from [[[https://sanctuary.ai/ Sanctuary AI]]]. {{infobox robot | name = Phoenix | organization = [[Sanctuary AI]] | height = 170 cm | weight = 70..."
wikitext
text/x-wiki
Phoenix is a humanoid robot from [[[https://sanctuary.ai/ Sanctuary AI]]].
{{infobox robot
| name = Phoenix
| organization = [[Sanctuary AI]]
| height = 170 cm
| weight = 70 kg
| two_hand_payload = 25
}}
[[Category:Robots]]
b17b68e2833f6b769574646edc48589265341950
242
241
2024-04-24T10:06:41Z
User2024
6
wikitext
text/x-wiki
Phoenix is a humanoid robot from [https://sanctuary.ai/ Sanctuary AI].
{{infobox robot
| name = Phoenix
| organization = Sanctuary AI
| height = 170 cm
| weight = 70 kg
| two_hand_payload = 25
}}
[[Category:Robots]]
dcd3c82313aecda5134d1f9e3d2b27e54783dbd5
243
242
2024-04-24T10:11:29Z
User2024
6
wikitext
text/x-wiki
Phoenix is a humanoid robot from [https://sanctuary.ai/ Sanctuary AI].
{{infobox robot
| name = Phoenix
| organization = Sanctuary AI
| height = 5 ft 7 in (170 cm)
| weight = 70 kg (155 lbs)
| two_hand_payload = 25
| video = https://youtube.com/watch?v=FH3zbUSMAAU
}}
[[Category:Robots]]
be293dfe2e105c119c1bb10dab2af8e0c4a3ad57
244
243
2024-04-24T10:12:52Z
User2024
6
wikitext
text/x-wiki
Phoenix is a humanoid robot from [https://sanctuary.ai/ Sanctuary AI].
{{infobox robot
| name = Phoenix
| organization = Sanctuary AI
| height = 5 ft 7 in (170 cm)
| weight = 70 kg (155 lbs)
| two_hand_payload = 25
| video_link = https://youtube.com/watch?v=FH3zbUSMAAU
}}
[[Category:Robots]]
b928963d323b97d5bddd063629cc3e04cc134186
247
244
2024-04-24T15:23:38Z
User2024
6
wikitext
text/x-wiki
Phoenix is a humanoid robot from [[Sanctuary AI]].
{{infobox robot
| name = Phoenix
| organization = Sanctuary AI
| height = 5 ft 7 in (170 cm)
| weight = 70 kg (155 lbs)
| two_hand_payload = 25
| video_link = https://youtube.com/watch?v=FH3zbUSMAAU
}}
[[Category:Robots]]
d7fb60f1f0f3f3734e1afcdcab51747a7bf4343b
252
247
2024-04-24T15:50:21Z
185.187.168.151
0
wikitext
text/x-wiki
Phoenix is a humanoid robot from [[Sanctuary AI]].
{{infobox robot
| name = Phoenix
| organization = Sanctuary AI
| height = 5 ft 7 in (170 cm)
| weight = 70 kg (155 lbs)
| two_hand_payload = 25
| video_link = https://youtube.com/watch?v=FH3zbUSMAAU
}}
[[Category:Robots]]
f9bb8e1027b5610ce1df912a7dbc12aabf463221
Eve
0
54
245
2024-04-24T15:17:48Z
User2024
6
Created page with "EVE is a humanoid robot from [[1X]]. {{infobox robot | name = EVE | organization = [[1X]] | height = 186 cm | weight = 86 kg | video_link = https://www.youtube.com/watch?v=20..."
wikitext
text/x-wiki
EVE is a humanoid robot from [[1X]].
{{infobox robot
| name = EVE
| organization = [[1X]]
| height = 186 cm
| weight = 86 kg
| video_link = https://www.youtube.com/watch?v=20GHG-R9eFI
}}
[[Category:Robots]]
811f8d79ffd2070e98821cb7539930a08ccffe60
Neo
0
55
246
2024-04-24T15:22:18Z
User2024
6
Created page with "NEO is a humanoid robot from [[1X]]. {{infobox robot | name = NEO | organization = [[1X]] | height = 165 cm | weight = 30 kg }} [[Category:Robots]]"
wikitext
text/x-wiki
NEO is a humanoid robot from [[1X]].
{{infobox robot
| name = NEO
| organization = [[1X]]
| height = 165 cm
| weight = 30 kg
}}
[[Category:Robots]]
3e01cd8490a07d14784346d9542e6eec15344f0f
Sanctuary AI
0
56
248
2024-04-24T15:27:09Z
User2024
6
Created page with "Sanctuary AI is building a humanoid robot called [[Phoenix]]. {{infobox company | name = Sanctuary AI | country = Canada | website_link = https://sanctuary.ai/ | robots = P..."
wikitext
text/x-wiki
Sanctuary AI is building a humanoid robot called [[Phoenix]].
{{infobox company
| name = Sanctuary AI
| country = Canada
| website_link = https://sanctuary.ai/
| robots = [[Phoenix]]
}}
[[Category:Companies]]
1de4bb8ad224b818e34d223a793b1b341413af88
GR-1
0
57
249
2024-04-24T15:30:39Z
User2024
6
Created page with "GR-1 is a humanoid robot from [[Fourier Intelligence]]. {{infobox robot | name = GR-1 | organization = [[1X]] | height = 165 cm | weight = 55 kg | cost = USD 149,999 }} ..."
wikitext
text/x-wiki
GR-1 is a humanoid robot from [[Fourier Intelligence]].
{{infobox robot
| name = GR-1
| organization = [[1X]]
| height = 165 cm
| weight = 55 kg
| cost = USD 149,999
}}
[[Category:Robots]]
16292cb2412b0a3148ff10ef51a6d19d398be087
250
249
2024-04-24T15:31:03Z
User2024
6
wikitext
text/x-wiki
GR-1 is a humanoid robot from [[Fourier Intelligence]].
{{infobox robot
| name = GR-1
| organization = [[Fourier Intelligence]]
| height = 165 cm
| weight = 55 kg
| cost = USD 149,999
}}
[[Category:Robots]]
dc14560d93557db867753f3361f1471b72613f32
Fourier Intelligence
0
58
251
2024-04-24T15:31:36Z
User2024
6
Created page with "Fourier Intelligence is building a humanoid robot called [[GR-1]]. {{infobox company | name = Fourier Intelligence | country = China | website_link = https://robots.fourierin..."
wikitext
text/x-wiki
Fourier Intelligence is building a humanoid robot called [[GR-1]].
{{infobox company
| name = Fourier Intelligence
| country = China
| website_link = https://robots.fourierintelligence.com/
| robots = [[GR-1]]
}}
[[Category:Companies]]
313af22cfa6cfa59c382767f170034b31b7fbfab
Cassie
0
24
253
212
2024-04-24T15:51:30Z
185.187.168.151
0
wikitext
text/x-wiki
[[File:Cassie.jpg|right|200px|thumb]]
Cassie is a bipedal robot designed by Oregon State University and licensed and built by [[Agility]].
{{infobox robot
| name = Cassie
| organization = [[Agility]]
| video_link = https://www.youtube.com/watch?v=64hKiuJ31a4
| cost = USD 250,000
| height = 115 cm
| weight = 31 kg
| speed = >4 m/s (8.95 mph)
| battery_life = 5 hours
| battery_capacity = 1 kWh
| dof = 10 (5 per leg)
| number_made = ~12
| status = Retired
}}
==Development==
On February 9, 2017, Cassie was unveiled at an event at Oregon State University featuring a live demo. A video was also posted to both the OSU and Agility Robotics YouTube channels.
On September 5, 2017, the University of Michigan received the first Cassie, which they named "Cassie Blue." They would later receive "Cassie Yellow."
==Firsts and World Record==
Oregon State University's DRAIL lab used a model trained with reinforcement learning to have Cassie run a 100 m dash in 24.73 seconds. The Guinness World Record required a standing start and finish, so Cassie averaged a speed of over 4 m/s. For comparison, the Guinness World Record for fastest running humanoid at the time was held by ASIMO at 2.5 m/s, a run that had to be performed indoors on a specially leveled floor. Cassie's run was done outside at the Whyte Track and Field Center on Oregon State's campus. The record was announced in September 2022, but the actual run took place six months earlier. Cassie and the OSU DRAIL lab have an entire page dedicated to them in the 2024 Guinness World Records book. Footage of the run can be seen [https://www.youtube.com/watch?v=DdojWYOK0Nc here].
[[Category: Robots]]
630f71425d577b499192d0f42c5108b74fa72597
H1
0
3
254
213
2024-04-24T15:51:41Z
185.187.168.151
0
wikitext
text/x-wiki
It is available for purchase [https://shop.unitree.com/products/unitree-h1 here].
{{infobox robot
| name = H1
| organization = [[Unitree]]
| video_link = https://www.youtube.com/watch?v=83ShvgtyFAg
| cost = USD 150,000
| purchase_link = https://shop.unitree.com/products/unitree-h1
}}
[[Category: Robots]]
7d927bc59601d9914890744e80c136cdf5675451
Main Page
0
1
255
238
2024-04-24T15:53:27Z
185.187.168.151
0
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots. As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
Here are some resources to get started learning about humanoid robots.
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
|
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Era Robotics]]
| [[XBot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[XPENG]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|}
6c6e462d5b14cc5f7afe2b1d21fe29df3bea59d6
259
255
2024-04-24T18:09:50Z
136.62.52.52
0
/* Getting Started */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots. As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
Here are some resources to get started learning about humanoid robots.
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
|
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Era Robotics]]
| [[XBot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[XPENG]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|}
61f03846868b7c0dad8249eaaca3f5188123251a
264
259
2024-04-24T22:21:30Z
Modeless
7
Add Booster Robotics
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots. As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
Here are some resources to get started learning about humanoid robots.
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
|
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Era Robotics]]
| [[XBot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[XPENG]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|-
| [[Booster Robotics]]
| [[BR002]]
|}
29a34f03af64a1a6997c4e3a499e3eb70af19ef3
267
264
2024-04-24T22:31:53Z
Modeless
7
Era Robotics -> Robotera
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots. As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
Here are some resources to get started learning about humanoid robots.
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
|
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Robotera]]
| [[XBot]] [[Starbot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[XPENG]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|-
| [[Booster Robotics]]
| [[BR002]]
|}
80a89f569412da0552f58b9f4a896ce52fb62254
268
267
2024-04-24T22:32:13Z
Modeless
7
/* List of Humanoid Robots */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots. As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
Here are some resources to get started learning about humanoid robots.
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
|
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[XPENG]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|-
| [[Booster Robotics]]
| [[BR002]]
|}
851b9311a06e5aaedc5d39878ea59060ca8a2e58
269
268
2024-04-24T22:33:39Z
Modeless
7
Agibot
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots. As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
Here are some resources to get started learning about humanoid robots.
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
|
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[XPENG]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|}
69a263b2f1db30e8f5ec98f7e5106cd1d7e5b31a
274
269
2024-04-24T22:37:07Z
Modeless
7
LimX Dynamics CL-1
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots. As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
Here are some resources to get started learning about humanoid robots.
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
|
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[XPENG]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|}
a0a57979d475a7941a70b01a25966648622c7528
276
274
2024-04-24T22:40:44Z
Modeless
7
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots. As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
Here are some resources to get started learning about humanoid robots.
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
|
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[XPENG]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|}
7c0b16d600c19b25098996214469dc1831edd5a0
278
276
2024-04-24T22:45:03Z
AnthroDev
8
/* List of Humanoid Robots */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots. As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
Here are some resources to get started learning about humanoid robots.
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
|
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[XPENG]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|}
9a091e9e5fc45585f8840bbb0abf74aa2c5d5414
279
278
2024-04-24T22:46:15Z
Modeless
7
/* List of Humanoid Robots */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots. As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
Here are some resources to get started learning about humanoid robots.
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
|
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[XPENG]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|}
33f7756b34024f686013655dc484699efaec7c1f
301
279
2024-04-24T23:56:55Z
141.156.214.27
0
/* List of Humanoid Robots */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots. As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
Here are some resources to get started learning about humanoid robots.
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
|
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[XPENG]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Proxy]]
|
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|}
26d342c3471d324688849993cc926ca10d2c8fcf
302
301
2024-04-24T23:57:44Z
141.156.214.27
0
/* List of Humanoid Robots */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots. As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
Here are some resources to get started learning about humanoid robots.
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
|
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Proxy]]
|
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[XPENG]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|}
ef8ca9748c48057fc6de39efa44aa739ca5bc338
Contributing
0
59
256
2024-04-24T15:57:34Z
185.187.168.151
0
Created page with "=== Adding an Infobox === There are a couple infobox templates which we use: * [[Template:Infobox_company]] * [[Template:Infobox_robot]] In both, all of the fields are opti..."
wikitext
text/x-wiki
=== Adding an Infobox ===
There are a couple infobox templates which we use:
* [[Template:Infobox_company]]
* [[Template:Infobox_robot]]
In both, all of the fields are optional, except for <code>name</code>.
To add a company infobox, use:
<syntaxhighlight lang="text">
{{infobox company
| name = Name
| country = Country
| website_link = https://link.com/
| robots = [[Stompy]]
}}
</syntaxhighlight>
To add a robot infobox, use:
<syntaxhighlight lang="text">
{{infobox
| name = Name
| organization = Company
| video_link = https://youtube.com/
| cost = USD 1
| height = 10 ft
| weight = 100 kg
| speed = 1 m/s
| lift_force = 10 lb
| battery_life = 5 hr
| battery_capacity = 1 kWh
| purchase_link = https://buy.com
| number_made = 10
| dof = 10
| status = Available
}}
</syntaxhighlight>
c6d224332ee1a97b5ce52686379d201eb6d48413
257
256
2024-04-24T15:57:45Z
185.187.168.151
0
wikitext
text/x-wiki
=== Adding an Infobox ===
There are a couple infobox templates which we use:
* [[Template:Infobox_company]]
* [[Template:Infobox_robot]]
In both, all of the fields are optional, except for <code>name</code>.
To add a company infobox, use:
<syntaxhighlight lang="text">
{{infobox company
| name = Name
| country = Country
| website_link = https://link.com/
| robots = [[Stompy]]
}}
</syntaxhighlight>
To add a robot infobox, use:
<syntaxhighlight lang="text">
{{infobox robot
| name = Name
| organization = Company
| video_link = https://youtube.com/
| cost = USD 1
| height = 10 ft
| weight = 100 kg
| speed = 1 m/s
| lift_force = 10 lb
| battery_life = 5 hr
| battery_capacity = 1 kWh
| purchase_link = https://buy.com
| number_made = 10
| dof = 10
| status = Available
}}
</syntaxhighlight>
1d88590bc979105f3b3eaa2af51e40944613ec95
Stompy
0
2
258
211
2024-04-24T16:26:50Z
185.187.168.151
0
wikitext
text/x-wiki
[[File:Stompy.jpg|right|300px|thumb]]
{{infobox robot
| name = Stompy
| organization = [[K-Scale Labs]]
| cost = USD 10,000
}}
Stompy is an open-source humanoid robot developed by [[K-Scale Labs]]. See the [[Stompy Build Guide|build guide]] for a walk-through of how to build one yourself.
= Hardware =
This section details the hardware used in humanoid robots, including actuators, cameras, compute units, PCBs and modules, batteries, displays, microphones, speakers, and wiring and connectors.
== Actuators ==
Actuators are the components that allow the robot to move and interact with its environment. They convert energy into mechanical motion. Common types used in humanoid robots include:
* Servo motors
* Stepper motors
* Linear actuators
== Cameras ==
Cameras are essential for visual processing, allowing the robot to perceive its surroundings. Important considerations include:
* Resolution and frame rate
* Field of view
* Depth sensing capabilities (3D cameras)
== Compute ==
The compute section handles the processing requirements of the robot. This includes:
* Microprocessors and microcontrollers
* Single-board computers like Raspberry Pi or Nvidia Jetson
* Dedicated AI accelerators for machine learning tasks
== PCB and Modules ==
Printed Circuit Boards (PCBs) and the modules on them are the backbone of the robot's electronic system.
* Main control board
* Power management modules
* Sensor interfaces
* Communication modules (Wi-Fi, Bluetooth)
== Batteries ==
Batteries provide the necessary power to all robotic systems and are crucial for mobile autonomy. Selection factors include:
* Battery type (Li-Ion, NiMH, Lead-Acid)
* Capacity (measured in mAh or Ah)
* Voltage and energy density
* Safety features and durability
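A rough back-of-the-envelope relation between the factors above: pack energy is roughly nominal voltage times capacity, and runtime is energy divided by average draw. All numbers in the sketch below are placeholder assumptions.
<syntaxhighlight lang="python">
# Back-of-the-envelope pack sizing (all values are placeholder assumptions)
nominal_voltage_v = 48.0   # e.g. a 13S Li-Ion pack
capacity_ah = 20.0         # rated capacity
energy_wh = nominal_voltage_v * capacity_ah   # 960 Wh
average_draw_w = 300.0     # assumed walking + compute load
runtime_h = energy_wh / average_draw_w        # ~3.2 h of autonomy
print(f"{energy_wh:.0f} Wh -> {runtime_h:.1f} h at {average_draw_w:.0f} W")
</syntaxhighlight>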
== Displays ==
Displays are used to present information such as system status, data, and interactive elements. Key features include:
* Size variations ranging from small to large panels
* Touchscreen capabilities
* High resolution displays
== Microphones ==
Microphones enable the robot to receive and process audio inputs, crucial for voice commands and auditory data. Factors to consider are:
* Sensitivity and noise cancellation
* Directionality (omnidirectional vs. unidirectional)
* Integration with voice recognition software
== Speakers ==
Speakers allow the robot to communicate audibly with its environment, essential for interaction and alerts. Considerations include:
* Power output and sound quality
* Size and mounting options
* Compatibility with audio processing hardware
== Wiring and Connectors ==
Proper wiring and connectors ensure reliable communication and power supply throughout the robot's components.
* Types of wires (gauge, shielding)
* Connectors (pin types, waterproofing)
* Cable management solutions
= Simulation =
For the latest simulation artifacts, see [https://kscale.dev/ the website].
[[Category:Robots]]
d3066bec124eb159c607c71d55cdfa0aaa3076d4
Category:Teleop
14
60
260
2024-04-24T18:19:39Z
136.62.52.52
0
Created page with "= Teleop = == whole-body == In whole-body teleop, the user controls a hardware replica of the robot directly. The real robot directly mimcis the pose of the replica robot...."
wikitext
text/x-wiki
= Teleop =
== whole-body ==
In whole-body teleop, the user controls a hardware replica of the robot directly. The real robot mimics the pose of the replica robot.
- https://mobile-aloha.github.io/
- https://www.youtube.com/watch?v=PFw5hwNVhbA
== VR teleop ==
In VR teleop the user controls a simulated version of the robot. The robot and VR headset are usually on the same LAN and communication is done via something akin to gRPC.
- https://github.com/Improbable-AI/VisionProTeleop
- https://github.com/fazildgr8/VR_communication_mujoco200
- https://github.com/pollen-robotics/reachy2021-unity-package
- https://freetale.medium.com/unity-grpc-in-2023-98b739cb115
== Tools ==
- Unity
- gRPC
- WebRTC
[[Category: Teleop]]
f9a7b51373b6814e5187c1ac27603cf88d6a5f1b
261
260
2024-04-24T18:32:42Z
136.62.52.52
0
wikitext
text/x-wiki
= Teleop =
Teleoperation is the art of controlling a robot from a distance (prefix "tele-" comes from the Ancient Greek word tēle, which means "far off, at a distance, far away, far from").
== Whole Body Teleop ==
In whole-body teleop, the user controls a hardware replica of the robot directly. The real robot mimics the pose of the replica robot.
* https://mobile-aloha.github.io/
* https://www.youtube.com/watch?v=PFw5hwNVhbA
== VR Teleop ==
In VR teleop the user controls a simulated version of the robot. The robot and VR headset are usually on the same LAN and communication is done via gRPC or WebRTC.
* https://github.com/Improbable-AI/VisionProTeleop
* https://github.com/fazildgr8/VR_communication_mujoco200
* https://github.com/pollen-robotics/reachy2021-unity-package
* https://freetale.medium.com/unity-grpc-in-2023-98b739cb115
== Controller Teleop ==
In controller teleop, the user uses joysticks, keyboards, and other controllers to control the robot. The buttons map to hardcoded behaviors on the robot.
* https://github.com/ros-teleop
== Latency ==
* https://ennerf.github.io/2016/09/20/A-Practical-Look-at-Latency-in-Robotics-The-Importance-of-Metrics-and-Operating-Systems.html
* https://link.springer.com/article/10.1007/s10846-022-01749-3
* https://twitter.com/watneyrobotics/status/1769058250731999591
[[Category: Teleop]]
cd133b73fc7e23a975065d60670fc0c3917ef11a
262
261
2024-04-24T18:37:27Z
136.62.52.52
0
wikitext
text/x-wiki
= Teleop =
Teleoperation is the art of controlling a robot from a distance (prefix "tele-" comes from the Ancient Greek word tēle, which means "far off, at a distance, far away, far from").
== Whole Body Teleop ==
In whole-body teleop, the user controls a hardware replica of the robot directly. The real robot mimics the pose of the replica robot.
* https://mobile-aloha.github.io/
* https://www.youtube.com/watch?v=PFw5hwNVhbA
* https://x.com/haoshu_fang/status/1707434624413306955
* https://x.com/aditya_oberai/status/1762637503495033171
== VR Teleop ==
In VR teleop the user controls a simulated version of the robot. The robot and VR headset are usually on the same LAN and communication is done via gRPC or WebRTC.
* https://github.com/Improbable-AI/VisionProTeleop
* https://github.com/fazildgr8/VR_communication_mujoco200
* https://github.com/pollen-robotics/reachy2021-unity-package
* https://freetale.medium.com/unity-grpc-in-2023-98b739cb115
* https://x.com/AndreTI/status/1780665435999924343
* https://holo-dex.github.io/
== Controller Teleop ==
In controller teleop, the user uses joysticks, keyboards, and other controllers to control the robot. The buttons map to hardcoded behaviors on the robot.
* https://github.com/ros-teleop
* https://x.com/ShivinDass/status/1606156894271197184
== Latency ==
* https://ennerf.github.io/2016/09/20/A-Practical-Look-at-Latency-in-Robotics-The-Importance-of-Metrics-and-Operating-Systems.html
* https://link.springer.com/article/10.1007/s10846-022-01749-3
* https://twitter.com/watneyrobotics/status/1769058250731999591
[[Category: Teleop]]
224ad1fc0bfb4af1a05bd89817dca10dea6b99b8
297
262
2024-04-24T23:55:02Z
141.156.214.27
0
/* VR Teleop */
wikitext
text/x-wiki
= Teleop =
Teleoperation is the art of controlling a robot from a distance (prefix "tele-" comes from the Ancient Greek word tēle, which means "far off, at a distance, far away, far from").
== Whole Body Teleop ==
In whole-body teleop, the user controls a hardware replica of the robot directly. The real robot mimics the pose of the replica robot.
* https://mobile-aloha.github.io/
* https://www.youtube.com/watch?v=PFw5hwNVhbA
* https://x.com/haoshu_fang/status/1707434624413306955
* https://x.com/aditya_oberai/status/1762637503495033171
== VR Teleop ==
In VR teleop the user controls a simulated version of the robot. The robot and VR headset are usually on the same LAN and communication is done via gRPC or WebRTC.
* https://github.com/Improbable-AI/VisionProTeleop
* https://github.com/fazildgr8/VR_communication_mujoco200
* https://github.com/pollen-robotics/reachy2021-unity-package
* https://freetale.medium.com/unity-grpc-in-2023-98b739cb115
* https://x.com/AndreTI/status/1780665435999924343
* https://holo-dex.github.io/
* https://what-is-proxy.com
== Controller Teleop ==
In controller teleop, the user uses joysticks, keyboards, and other controllers to control the robot. The buttons map to hardcoded behaviors on the robot.
* https://github.com/ros-teleop
* https://x.com/ShivinDass/status/1606156894271197184
== Latency ==
* https://ennerf.github.io/2016/09/20/A-Practical-Look-at-Latency-in-Robotics-The-Importance-of-Metrics-and-Operating-Systems.html
* https://link.springer.com/article/10.1007/s10846-022-01749-3
* https://twitter.com/watneyrobotics/status/1769058250731999591
[[Category: Teleop]]
5ce453d38dd38d5e509cbe37e62219bc461ef0b4
298
297
2024-04-24T23:55:16Z
141.156.214.27
0
/* Controller Teleop */
wikitext
text/x-wiki
= Teleop =
Teleoperation is the art of controlling a robot from a distance (prefix "tele-" comes from the Ancient Greek word tēle, which means "far off, at a distance, far away, far from").
== Whole Body Teleop ==
In whole-body teleop, the user controls a hardware replica of the robot directly. The real robot mimics the pose of the replica robot.
* https://mobile-aloha.github.io/
* https://www.youtube.com/watch?v=PFw5hwNVhbA
* https://x.com/haoshu_fang/status/1707434624413306955
* https://x.com/aditya_oberai/status/1762637503495033171
== VR Teleop ==
In VR teleop the user controls a simulated version of the robot. The robot and VR headset are usually on the same LAN and communication is done via gRPC or WebRTC.
* https://github.com/Improbable-AI/VisionProTeleop
* https://github.com/fazildgr8/VR_communication_mujoco200
* https://github.com/pollen-robotics/reachy2021-unity-package
* https://freetale.medium.com/unity-grpc-in-2023-98b739cb115
* https://x.com/AndreTI/status/1780665435999924343
* https://holo-dex.github.io/
* https://what-is-proxy.com
== Controller Teleop ==
In controller teleop, the user uses joysticks, keyboards, and other controllers to control the robot. The buttons map to hardcoded behaviors on the robot.
* https://github.com/ros-teleop
* https://x.com/ShivinDass/status/1606156894271197184
* https://what-is-proxy.com
== Latency ==
* https://ennerf.github.io/2016/09/20/A-Practical-Look-at-Latency-in-Robotics-The-Importance-of-Metrics-and-Operating-Systems.html
* https://link.springer.com/article/10.1007/s10846-022-01749-3
* https://twitter.com/watneyrobotics/status/1769058250731999591
[[Category: Teleop]]
21925b264054636603453c256de0cb2c794e0b12
299
298
2024-04-24T23:55:31Z
141.156.214.27
0
/* Whole Body Teleop */
wikitext
text/x-wiki
= Teleop =
Teleoperation is the art of controlling a robot from a distance (prefix "tele-" comes from the Ancient Greek word tēle, which means "far off, at a distance, far away, far from").
== Whole Body Teleop ==
In whole-body teleop, the user controls a hardware replica of the robot directly. The real robot mimics the pose of the replica robot.
* https://mobile-aloha.github.io/
* https://www.youtube.com/watch?v=PFw5hwNVhbA
* https://x.com/haoshu_fang/status/1707434624413306955
* https://x.com/aditya_oberai/status/1762637503495033171
* https://what-is-proxy.com
== VR Teleop ==
In VR teleop the user controls a simulated version of the robot. The robot and VR headset are usually on the same LAN and communication is done via gRPC or WebRTC.
* https://github.com/Improbable-AI/VisionProTeleop
* https://github.com/fazildgr8/VR_communication_mujoco200
* https://github.com/pollen-robotics/reachy2021-unity-package
* https://freetale.medium.com/unity-grpc-in-2023-98b739cb115
* https://x.com/AndreTI/status/1780665435999924343
* https://holo-dex.github.io/
* https://what-is-proxy.com
== Controller Teleop ==
In controller teleop, the user uses joysticks, keyboards, and other controllers to control the robot. The buttons map to hardcoded behaviors on the robot.
* https://github.com/ros-teleop
* https://x.com/ShivinDass/status/1606156894271197184
* https://what-is-proxy.com
== Latency ==
* https://ennerf.github.io/2016/09/20/A-Practical-Look-at-Latency-in-Robotics-The-Importance-of-Metrics-and-Operating-Systems.html
* https://link.springer.com/article/10.1007/s10846-022-01749-3
* https://twitter.com/watneyrobotics/status/1769058250731999591
[[Category: Teleop]]
704c9bbfe08816f9e961fec353379ee80e4ca303
300
299
2024-04-24T23:56:13Z
141.156.214.27
0
/* Teleop */
wikitext
text/x-wiki
= Teleop =
Teleoperation is the art of controlling a robot from a distance (prefix "tele-" comes from the Ancient Greek word tēle, which means "far off, at a distance, far away, far from").
Robots specifically designed to be teleoperated by a human are known as proxies.
== Whole Body Teleop ==
In whole-body teleop, the user controls a hardware replica of the robot directly. The real robot mimics the pose of the replica robot.
* https://mobile-aloha.github.io/
* https://www.youtube.com/watch?v=PFw5hwNVhbA
* https://x.com/haoshu_fang/status/1707434624413306955
* https://x.com/aditya_oberai/status/1762637503495033171
* https://what-is-proxy.com
== VR Teleop ==
In VR teleop, the user controls a simulated version of the robot. The robot and VR headset are usually on the same LAN, and communication is done via gRPC or WebRTC; a minimal pose-streaming sketch follows the links below.
* https://github.com/Improbable-AI/VisionProTeleop
* https://github.com/fazildgr8/VR_communication_mujoco200
* https://github.com/pollen-robotics/reachy2021-unity-package
* https://freetale.medium.com/unity-grpc-in-2023-98b739cb115
* https://x.com/AndreTI/status/1780665435999924343
* https://holo-dex.github.io/
* https://what-is-proxy.com
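The LAN link described above can be approximated with a very small pose stream. The sketch below uses UDP and JSON on loopback purely as an illustration standing in for gRPC or WebRTC; the message fields and port number are made up for the example.
<syntaxhighlight lang="python">
import json
import socket

# Robot side: listen for pose messages from the headset
robot = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
robot.bind(("127.0.0.1", 9870))

# Headset side: send one tracked wrist pose (placeholder fields)
headset = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
pose = {"pos": [0.3, 0.0, 1.1], "quat": [1.0, 0.0, 0.0, 0.0]}
headset.sendto(json.dumps(pose).encode(), ("127.0.0.1", 9870))

data, _ = robot.recvfrom(4096)
target = json.loads(data)   # hand target["pos"] / target["quat"] to IK or retargeting
print(target)
</syntaxhighlight>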
== Controller Teleop ==
In controller teleop, the user uses joysticks, keyboards, and other controllers to control the robot. The buttons map to hardcoded behaviors on the robot; a minimal sketch of such a mapping follows the links below.
* https://github.com/ros-teleop
* https://x.com/ShivinDass/status/1606156894271197184
* https://what-is-proxy.com
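A minimal sketch of this button-to-behavior mapping is given below. The key names and behaviors are placeholders; a real setup would read a joystick (for example through the ros-teleop packages linked above) rather than standard input.
<syntaxhighlight lang=python>
# Controller teleop sketch: each button (here, a key typed on stdin) maps to a
# hardcoded behavior. Key names and behaviors are illustrative placeholders.
def walk_forward():
    print("robot: walk forward")

def stop():
    print("robot: stop")

def wave():
    print("robot: wave")

BUTTON_MAP = {"w": walk_forward, "s": stop, "g": wave}

while True:
    key = input("button> ").strip().lower()
    BUTTON_MAP.get(key, lambda: print("unmapped button"))()
</syntaxhighlight>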
== Latency ==
* https://ennerf.github.io/2016/09/20/A-Practical-Look-at-Latency-in-Robotics-The-Importance-of-Metrics-and-Operating-Systems.html
* https://link.springer.com/article/10.1007/s10846-022-01749-3
* https://twitter.com/watneyrobotics/status/1769058250731999591
[[Category: Teleop]]
1be905c192c941fd03d2b29384ce17def87ae8a2
Servo Design
0
61
263
2024-04-24T21:39:56Z
Ben
2
Created page with "This page contains information about how to build a good open-source servo. === Open Source Servos === * [https://github.com/unhuman-io/obot OBot] * [https://github.com/atop..."
wikitext
text/x-wiki
This page contains information about how to build a good open-source servo.
=== Open Source Servos ===
* [https://github.com/unhuman-io/obot OBot]
* [https://github.com/atopile/spin-servo-drive SPIN Servo Drive]
=== Controllers ===
* [https://github.com/napowderly/mcp2515 MCP2515 transceiver circuit]
[[Category: Hardware]]
[[Category: Electronics]]
bd3059f6d3f0352883e1768ffff230f1990fbb61
Booster Robotics
0
62
265
2024-04-24T22:23:22Z
Modeless
7
Created page with "The only online presence of Booster Robotics is their YouTube channel with this video: https://www.youtube.com/watch?v=SIK2MFIIIXw"
wikitext
text/x-wiki
The only online presence of Booster Robotics is their YouTube channel with this video: https://www.youtube.com/watch?v=SIK2MFIIIXw
7799473ae99332bc55404443ec91ed662eadc8d8
281
265
2024-04-24T22:50:03Z
Modeless
7
wikitext
text/x-wiki
{{infobox company
| name = Booster Robotics
| website_link = https://www.youtube.com/watch?v=SIK2MFIIIXw
| robots = [[BR002]]
}}
The only online presence of Booster Robotics is their YouTube channel with this video: https://www.youtube.com/watch?v=SIK2MFIIIXw
90eb4aaed57594e67fe652cde28ccda528cb4caa
282
281
2024-04-24T22:50:45Z
Modeless
7
wikitext
text/x-wiki
The only online presence of Booster Robotics is their YouTube channel with this video: https://www.youtube.com/watch?v=SIK2MFIIIXw
{{infobox company
| name = Booster Robotics
| website_link = https://www.youtube.com/watch?v=SIK2MFIIIXw
| robots = [[BR002]]
}}
[[Category:Companies]]
4d02a5c4bcdb1e4e48926ade7a34444de37bcce6
Talk:Main Page
1
63
266
2024-04-24T22:26:50Z
Modeless
7
Created page with "You should install a YouTube plugin so we can embed robot demo videos! --~~~~"
wikitext
text/x-wiki
You should install a YouTube plugin so we can embed robot demo videos! --[[User:Modeless|Modeless]] ([[User talk:Modeless|talk]]) 22:26, 24 April 2024 (UTC)
bb3fbc881bc9f53c1da250cc8f16fd1c465d21dd
Learning algorithms
0
32
270
240
2024-04-24T22:34:26Z
104.7.66.79
0
/* Training algorithms */
wikitext
text/x-wiki
= Learning algorithms =
Learning algorithms allow humanoids to be trained to perform different skills such as manipulation or locomotion. Below is an overview of general approaches to training machine learning models for humanoid robots.
== Physics engines ==
Physics engines are software libraries designed to simulate physical systems in a virtual environment. They are crucial in a variety of fields such as video games, animation, robotics, and engineering simulations. These engines handle the mathematics involved in simulating physical processes like motion, collisions, and fluid dynamics.
===PhysX===
PhysX is a physics engine by NVIDIA used primarily for video games and real-time simulations. It supports rigid body dynamics, cloth simulation, and particle effects, enhancing realism and interactivity in 3D environments.
===Mujoco===
MuJoCo (Multi-Joint dynamics with Contact) is a physics engine designed for research in robotics and biomechanics. It's known for its speed, accuracy, and ease of use, making it popular for simulating complex robotic systems and articulated structures.
==Simulators==
===[[Isaac Sim]]===
===[[VSim]]===
==Training frameworks==
Popular training frameworks are listed below.
===Isaac Gym===
Isaac Gym is NVIDIA's robotics simulation tool, part of the Isaac SDK. It leverages GPU acceleration to enable the simulation of thousands of robot bodies simultaneously, making it highly efficient for training machine learning models in robotics. It's designed to streamline robotics applications, focusing on reinforcement learning in a virtual environment.
===Gymnasium===
Gymnasium is an open-source toolkit for developing and comparing reinforcement learning algorithms. Originally developed by OpenAI as "Gym," it provides a standardized set of environments (like Atari games, robotic simulations, etc.) to test and benchmark AI algorithms. It's widely used in the AI research community to foster innovation and replication in RL studies.
== Training methods ==
===[[Imitation learning]]===
Imitation Learning is a technique where models learn to perform tasks by mimicking expert behaviors. This approach is often used when defining explicit reward functions is challenging. It accelerates learning by using pre-collected datasets of expert demonstrations, reducing the need for trial-and-error in initial learning phases.
===[[Reinforcement Learning]]===
Reinforcement Learning involves agents learning to make decisions by interacting with an environment to maximize cumulative rewards. It's foundational in fields where sequential decision-making is crucial, like gaming, autonomous vehicles, and robotics. RL uses methods like Q-learning and policy gradient to iteratively improve agent performance based on feedback from the environment.
[[Category: Software]]
6cb8b923d63a866774f72955ecde6ec64a97cde1
277
270
2024-04-24T22:42:33Z
104.7.66.79
0
/* Training frameworks */
wikitext
text/x-wiki
= Learning algorithms =
Learning algorithms allow humanoids to be trained to perform different skills such as manipulation or locomotion. Below is an overview of general approaches to training machine learning models for humanoid robots.
== Physics engines ==
Physics engines are software libraries designed to simulate physical systems in a virtual environment. They are crucial in a variety of fields such as video games, animation, robotics, and engineering simulations. These engines handle the mathematics involved in simulating physical processes like motion, collisions, and fluid dynamics.
===PhysX===
PhysX is a physics engine by NVIDIA used primarily for video games and real-time simulations. It supports rigid body dynamics, cloth simulation, and particle effects, enhancing realism and interactivity in 3D environments.
===Mujoco===
MuJoCo (Multi-Joint dynamics with Contact) is a physics engine designed for research in robotics and biomechanics. It's known for its speed, accuracy, and ease of use, making it popular for simulating complex robotic systems and articulated structures.
==Simulators==
===[[Isaac Sim]]===
===[[VSim]]===
==Training frameworks==
Popular training frameworks are listed here with example applications.
===[https://github.com/NVIDIA-Omniverse/IsaacGymEnvs Isaac Gym]===
Isaac Gym is NVIDIA's robotics simulation tool, part of the Isaac SDK. It leverages GPU acceleration to enable the simulation of thousands of robot bodies simultaneously, making it highly efficient for training machine learning models in robotics. It's designed to streamline robotics applications, focusing on reinforcement learning in a virtual environment.
===[https://gymnasium.farama.org/ Gymnasium]===
Gymnasium is an open-source toolkit for developing and comparing reinforcement learning algorithms. Originally developed by OpenAI as "Gym," it provides a standardized set of environments (like Atari games, robotic simulations, etc.) to test and benchmark AI algorithms. It's widely used in the AI research community to foster innovation and replication in RL studies.
===[[Applications]]===
Over the last decade, several advances have been made in learning locomotion and manipulation skills in simulation; a non-comprehensive list is given on that page.
== Training methods ==
===[[Imitation learning]]===
Imitation Learning is a technique where models learn to perform tasks by mimicking expert behaviors. This approach is often used when defining explicit reward functions is challenging. It accelerates learning by using pre-collected datasets of expert demonstrations, reducing the need for trial-and-error in initial learning phases.
===[[Reinforcement Learning]]===
Reinforcement Learning involves agents learning to make decisions by interacting with an environment to maximize cumulative rewards. It's foundational in fields where sequential decision-making is crucial, like gaming, autonomous vehicles, and robotics. RL uses methods like Q-learning and policy gradient to iteratively improve agent performance based on feedback from the environment.
[[Category: Software]]
62732a33b16beafae8589a4752c2dc52ebde5f7d
291
277
2024-04-24T23:07:37Z
104.7.66.79
0
/* Simulators */
wikitext
text/x-wiki
= Learning algorithms =
Learning algorithms allow humanoids to be trained to perform different skills such as manipulation or locomotion. Below is an overview of general approaches to training machine learning models for humanoid robots.
== Physics engines ==
Physics engines are software libraries designed to simulate physical systems in a virtual environment. They are crucial in a variety of fields such as video games, animation, robotics, and engineering simulations. These engines handle the mathematics involved in simulating physical processes like motion, collisions, and fluid dynamics.
===PhysX===
PhysX is a physics engine by NVIDIA used primarily for video games and real-time simulations. It supports rigid body dynamics, cloth simulation, and particle effects, enhancing realism and interactivity in 3D environments.
===Mujoco===
MuJoCo (Multi-Joint dynamics with Contact) is a physics engine designed for research in robotics and biomechanics. It's known for its speed, accuracy, and ease of use, making it popular for simulating complex robotic systems and articulated structures.
==Simulators==
===[[Isaac Sim]]===
===[https://github.com/haosulab/ManiSkill ManiSkill]===
===[[VSim]]===
==Training frameworks==
Popular training frameworks are listed here with example applications.
===[https://github.com/NVIDIA-Omniverse/IsaacGymEnvs Isaac Gym]===
Isaac Gym is NVIDIA's robotics simulation tool, part of the Isaac SDK. It leverages GPU acceleration to enable the simulation of thousands of robot bodies simultaneously, making it highly efficient for training machine learning models in robotics. It's designed to streamline robotics applications, focusing on reinforcement learning in a virtual environment.
===[https://gymnasium.farama.org/ Gymnasium]===
Gymnasium is an open-source toolkit for developing and comparing reinforcement learning algorithms. Originally developed by OpenAI as "Gym," it provides a standardized set of environments (like Atari games, robotic simulations, etc.) to test and benchmark AI algorithms. It's widely used in the AI research community to foster innovation and replication in RL studies.
===[[Applications]]===
Over the last decade, several advances have been made in learning locomotion and manipulation skills in simulation; a non-comprehensive list is given on that page.
== Training methods ==
===[[Imitation learning]]===
Imitation Learning is a technique where models learn to perform tasks by mimicking expert behaviors. This approach is often used when defining explicit reward functions is challenging. It accelerates learning by using pre-collected datasets of expert demonstrations, reducing the need for trial-and-error in initial learning phases.
===[[Reinforcement Learning]]===
Reinforcement Learning involves agents learning to make decisions by interacting with an environment to maximize cumulative rewards. It's foundational in fields where sequential decision-making is crucial, like gaming, autonomous vehicles, and robotics. RL uses methods like Q-learning and policy gradient to iteratively improve agent performance based on feedback from the environment.
[[Category: Software]]
fc31af82c9aaf95de5da92c905ac9d26bb36274b
292
291
2024-04-24T23:07:49Z
104.7.66.79
0
/* https://github.com/haosulab/ManiSkill ManiSkill */
wikitext
text/x-wiki
= Learning algorithms =
Learning algorithms allow humanoids to be trained to perform different skills such as manipulation or locomotion. Below is an overview of general approaches to training machine learning models for humanoid robots.
== Physics engines ==
Physics engines are software libraries designed to simulate physical systems in a virtual environment. They are crucial in a variety of fields such as video games, animation, robotics, and engineering simulations. These engines handle the mathematics involved in simulating physical processes like motion, collisions, and fluid dynamics.
===PhysX===
PhysX is a physics engine by NVIDIA used primarily for video games and real-time simulations. It supports rigid body dynamics, cloth simulation, and particle effects, enhancing realism and interactivity in 3D environments.
===Mujoco===
MuJoCo (Multi-Joint dynamics with Contact) is a physics engine designed for research in robotics and biomechanics. It's known for its speed, accuracy, and ease of use, making it popular for simulating complex robotic systems and articulated structures.
==Simulators==
===[[Isaac Sim]]===
===[https://github.com/haosulab/ManiSkill ManiSkill]===
===[[VSim]]===
==Training frameworks==
Popular training frameworks are listed here with example applications.
===[https://github.com/NVIDIA-Omniverse/IsaacGymEnvs Isaac Gym]===
Isaac Gym is NVIDIA's robotics simulation tool, part of the Isaac SDK. It leverages GPU acceleration to enable the simulation of thousands of robot bodies simultaneously, making it highly efficient for training machine learning models in robotics. It's designed to streamline robotics applications, focusing on reinforcement learning in a virtual environment.
===[https://gymnasium.farama.org/ Gymnasium]===
Gymnasium is an open-source toolkit for developing and comparing reinforcement learning algorithms. Originally developed by OpenAI as "Gym," it provides a standardized set of environments (like Atari games, robotic simulations, etc.) to test and benchmark AI algorithms. It's widely used in the AI research community to foster innovation and replication in RL studies.
===[[Applications]]===
Over the last decade, several advances have been made in learning locomotion and manipulation skills in simulation; a non-comprehensive list is given on that page.
== Training methods ==
===[[Imitation learning]]===
Imitation Learning is a technique where models learn to perform tasks by mimicking expert behaviors. This approach is often used when defining explicit reward functions is challenging. It accelerates learning by using pre-collected datasets of expert demonstrations, reducing the need for trial-and-error in initial learning phases.
===[[Reinforcement Learning]]===
Reinforcement Learning involves agents learning to make decisions by interacting with an environment to maximize cumulative rewards. It's foundational in fields where sequential decision-making is crucial, like gaming, autonomous vehicles, and robotics. RL uses methods like Q-learning and policy gradient to iteratively improve agent performance based on feedback from the environment.
[[Category: Software]]
f53fca0a85afac92a15901c08133f171ebce011e
294
292
2024-04-24T23:11:40Z
104.7.66.79
0
/* Learning algorithms */
wikitext
text/x-wiki
= Learning algorithms =
Learning algorithms allow humanoids to be trained to perform different skills such as manipulation or locomotion. Below is an overview of general approaches to training machine learning models for humanoid robots, with possible examples of applications.
== Physics engines ==
Physics engines are software libraries designed to simulate physical systems in a virtual environment. They are crucial in a variety of fields such as video games, animation, robotics, and engineering simulations. These engines handle the mathematics involved in simulating physical processes like motion, collisions, and fluid dynamics.
===PhysX===
PhysX is a physics engine by NVIDIA used primarily for video games and real-time simulations. It supports rigid body dynamics, cloth simulation, and particle effects, enhancing realism and interactivity in 3D environments.
===Mujoco===
MuJoCo (Multi-Joint dynamics with Contact) is a physics engine designed for research in robotics and biomechanics. It's known for its speed, accuracy, and ease of use, making it popular for simulating complex robotic systems and articulated structures.
==Simulators==
===[[Isaac Sim]]===
===[https://github.com/haosulab/ManiSkill ManiSkill]===
===[[VSim]]===
==Training frameworks==
Popular training frameworks are listed here with example applications.
===[https://github.com/NVIDIA-Omniverse/IsaacGymEnvs Isaac Gym]===
Isaac Gym is NVIDIA's robotics simulation tool, part of the Isaac SDK. It leverages GPU acceleration to enable the simulation of thousands of robot bodies simultaneously, making it highly efficient for training machine learning models in robotics. It's designed to streamline robotics applications, focusing on reinforcement learning in a virtual environment.
===[https://gymnasium.farama.org/ Gymnasium]===
Gymnasium is an open-source toolkit for developing and comparing reinforcement learning algorithms. Originally developed by OpenAI as "Gym," it provides a standardized set of environments (like Atari games, robotic simulations, etc.) to test and benchmark AI algorithms. It's widely used in the AI research community to foster innovation and replication in RL studies.
===[[Applications]]===
Over the last decade, several advances have been made in learning locomotion and manipulation skills in simulation; a non-comprehensive list is given on that page.
== Training methods ==
===[[Imitation learning]]===
Imitation Learning is a technique where models learn to perform tasks by mimicking expert behaviors. This approach is often used when defining explicit reward functions is challenging. It accelerates learning by using pre-collected datasets of expert demonstrations, reducing the need for trial-and-error in initial learning phases.
===[[Reinforcement Learning]]===
Reinforcement Learning involves agents learning to make decisions by interacting with an environment to maximize cumulative rewards. It's foundational in fields where sequential decision-making is crucial, like gaming, autonomous vehicles, and robotics. RL uses methods like Q-learning and policy gradient to iteratively improve agent performance based on feedback from the environment.
[[Category: Software]]
8ca58594e364c6b3fd5309125a3d5bf65bc15f2b
295
294
2024-04-24T23:12:19Z
104.7.66.79
0
/* Learning algorithms */
wikitext
text/x-wiki
= Learning algorithms =
Learning algorithms allow humanoids to be trained to perform different skills such as manipulation or locomotion. Below is an overview of general approaches to training machine learning models for humanoid robots, with example [[applications]].
== Physics engines ==
Physics engines are software libraries designed to simulate physical systems in a virtual environment. They are crucial in a variety of fields such as video games, animation, robotics, and engineering simulations. These engines handle the mathematics involved in simulating physical processes like motion, collisions, and fluid dynamics.
===PhysX===
PhysX is a physics engine by NVIDIA used primarily for video games and real-time simulations. It supports rigid body dynamics, cloth simulation, and particle effects, enhancing realism and interactivity in 3D environments.
===Mujoco===
MuJoCo (Multi-Joint dynamics with Contact) is a physics engine designed for research in robotics and biomechanics. It's known for its speed, accuracy, and ease of use, making it popular for simulating complex robotic systems and articulated structures.
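For a sense of the API, the sketch below uses the official Python bindings (installed with <code>pip install mujoco</code>) to build a one-body model from an MJCF string and step its passive dynamics. The model itself is a toy made up for the example.
<syntaxhighlight lang=python>
# Minimal MuJoCo example using the official Python bindings: load a toy MJCF
# model (a single free-falling sphere) and step the simulation.
import mujoco

MJCF = """
<mujoco>
  <worldbody>
    <body pos="0 0 1">
      <freejoint/>
      <geom type="sphere" size="0.1" mass="1"/>
    </body>
  </worldbody>
</mujoco>
"""

model = mujoco.MjModel.from_xml_string(MJCF)
data = mujoco.MjData(model)
for _ in range(500):  # ~1 s of simulation at the default 2 ms timestep
    mujoco.mj_step(model, data)
print("sphere height after 1 s:", data.qpos[2])
</syntaxhighlight>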
==Simulators==
===[[Isaac Sim]]===
===[https://github.com/haosulab/ManiSkill ManiSkill]===
===[[VSim]]===
==Training frameworks==
Popular training frameworks are listed here with example applications.
===[https://github.com/NVIDIA-Omniverse/IsaacGymEnvs Isaac Gym]===
Isaac Gym is NVIDIA's robotics simulation tool, part of the Isaac SDK. It leverages GPU acceleration to enable the simulation of thousands of robot bodies simultaneously, making it highly efficient for training machine learning models in robotics. It's designed to streamline robotics applications, focusing on reinforcement learning in a virtual environment.
===[https://gymnasium.farama.org/ Gymnasium]===
Gymnasium is an open-source toolkit for developing and comparing reinforcement learning algorithms. Originally developed by OpenAI as "Gym," it provides a standardized set of environments (like Atari games, robotic simulations, etc.) to test and benchmark AI algorithms. It's widely used in the AI research community to foster innovation and replication in RL studies.
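The sketch below shows the standard Gymnasium interaction loop on the MuJoCo Humanoid task. It assumes <code>gymnasium[mujoco]</code> is installed and that the environment id is <code>Humanoid-v4</code> (ids change between releases).
<syntaxhighlight lang=python>
# Standard Gymnasium rollout loop with random actions on the Humanoid task.
import gymnasium as gym

env = gym.make("Humanoid-v4")
obs, info = env.reset(seed=0)
for _ in range(1000):
    action = env.action_space.sample()  # replace with a trained policy
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
env.close()
</syntaxhighlight>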
===[[Applications]]===
Over the last decade, several advances have been made in learning locomotion and manipulation skills in simulation; a non-comprehensive list is given on that page.
== Training methods ==
===[[Imitation learning]]===
Imitation Learning is a technique where models learn to perform tasks by mimicking expert behaviors. This approach is often used when defining explicit reward functions is challenging. It accelerates learning by using pre-collected datasets of expert demonstrations, reducing the need for trial-and-error in initial learning phases.
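A minimal behavior-cloning sketch in PyTorch is shown below. The observation/action dimensions and the random "demonstration" tensors are placeholders standing in for real teleoperation logs.
<syntaxhighlight lang=python>
# Behavior cloning sketch: regress expert actions from observations. The
# "demonstrations" here are random tensors standing in for real teleop logs.
import torch
import torch.nn as nn

obs_dim, act_dim, batch = 48, 12, 256  # illustrative sizes
demo_obs = torch.randn(10_000, obs_dim)
demo_act = torch.randn(10_000, act_dim)

policy = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU(), nn.Linear(256, act_dim))
optim = torch.optim.Adam(policy.parameters(), lr=1e-3)

for epoch in range(10):
    for i in range(0, len(demo_obs), batch):
        loss = nn.functional.mse_loss(policy(demo_obs[i:i + batch]), demo_act[i:i + batch])
        optim.zero_grad()
        loss.backward()
        optim.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
</syntaxhighlight>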
===[[Reinforcement Learning]]===
Reinforcement Learning involves agents learning to make decisions by interacting with an environment to maximize cumulative rewards. It's foundational in fields where sequential decision-making is crucial, like gaming, autonomous vehicles, and robotics. RL uses methods like Q-learning and policy gradient to iteratively improve agent performance based on feedback from the environment.
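As one concrete, off-the-shelf example, the sketch below trains a PPO policy on the Gymnasium Humanoid task with Stable-Baselines3. The library choice and hyperparameters are ours for illustration only and are not prescribed by any of the frameworks listed above.
<syntaxhighlight lang=python>
# PPO on the Gymnasium Humanoid task via Stable-Baselines3 (>= 2.0, which
# supports Gymnasium environments). Default hyperparameters; a real humanoid
# would need tuning and far more environment steps.
import gymnasium as gym
from stable_baselines3 import PPO

env = gym.make("Humanoid-v4")
model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=1_000_000)
model.save("ppo_humanoid")
</syntaxhighlight>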
[[Category: Software]]
9aacff7ab539b9555c4e3e55eda4670b9cfb7d6f
AGIBot
0
64
271
2024-04-24T22:35:45Z
Modeless
7
Created page with "Agibot is a Chinese startup formed in 2022. They have a humanoid called RAISE-A1. Their CEO gave a detailed presentation about it here: https://www.youtube.com/watch?v=ZwjxbDV..."
wikitext
text/x-wiki
Agibot is a Chinese startup formed in 2022. They have a humanoid called RAISE-A1. Their CEO gave a detailed presentation about it here: https://www.youtube.com/watch?v=ZwjxbDVbGpU&t=1471s
30e6719035c93a254d3bf112eb0f0343520274d8
280
271
2024-04-24T22:48:42Z
Modeless
7
wikitext
text/x-wiki
Agibot is a Chinese startup formed in 2022. The name stands for "Artificial General Intelligence Bot". They have a humanoid called RAISE-A1. Their CEO gave a detailed presentation about it here: https://www.youtube.com/watch?v=ZwjxbDVbGpU&t=1471s
b896c8fa1667649b273aeecba731b9bb0c7f5fde
284
280
2024-04-24T22:52:34Z
Modeless
7
wikitext
text/x-wiki
The "Agi" in their name stands for "Artificial General Intelligence". Their CEO gave a detailed presentation here: https://www.youtube.com/watch?v=ZwjxbDVbGpU&t=1471s
{{infobox company
| name = Agibot
| website_link = https://www.agibot.com/
| robots = [[RAISE-A1]]
}}
[[Category:Companies]]
52fa639517a52b0564c2195b853ebd7ca21739fb
285
284
2024-04-24T22:53:10Z
Modeless
7
wikitext
text/x-wiki
The "Agi" in their name stands for "Artificial General Intelligence". Their CEO gave a detailed presentation here: https://www.youtube.com/watch?v=ZwjxbDVbGpU&t=1471s
{{infobox company
| name = Agibot
| country = China
| website_link = https://www.agibot.com/
| robots = [[RAISE-A1]]
}}
[[Category:Companies]]
73984c3a371c72de33880735a346e5eb1c3be56b
Reinforcement Learning
0
34
272
131
2024-04-24T22:36:25Z
104.7.66.79
0
/* PPO */
wikitext
text/x-wiki
==Training algorithms==
===A2C===
===[https://en.wikipedia.org/wiki/Proximal_policy_optimization PPO]===
e18348cdc5f5f670cfeff48fab551f021f7b3423
273
272
2024-04-24T22:36:51Z
104.7.66.79
0
/* A2C */
wikitext
text/x-wiki
==Training algorithms==
===[https://en.wikipedia.org/wiki/Advantage_Actor_Critic A2C]===
===[https://en.wikipedia.org/wiki/Proximal_policy_optimization PPO]===
b2a6996a393c689b8ed0d8011333bd8d9c5e9f00
LimX Dynamics
0
65
275
2024-04-24T22:37:59Z
Modeless
7
Created page with "LimX Dynamics was founded in 2022 in China. Their first product is a dog robot with wheels for feet, but their second product is a humanoid called "CL-1". Here is a video show..."
wikitext
text/x-wiki
LimX Dynamics was founded in 2022 in China. Their first product is a dog robot with wheels for feet, but their second product is a humanoid called "CL-1". Here is a video showcasing their teleoperation setup to collect training data. https://www.youtube.com/watch?v=2dmjzMv-y-M
ece378ea8022c228973ffb1bc02d3a5c0121654f
286
275
2024-04-24T22:54:41Z
Modeless
7
wikitext
text/x-wiki
LimX Dynamics was founded in 2022 in China. Their first product is a dog robot with wheels for feet, but their second product is a humanoid called [[CL-1]]. Here is a video showcasing their teleoperation setup to collect training data. https://www.youtube.com/watch?v=2dmjzMv-y-M
{{infobox company
| name = LimX Dynamics
| country = China
| website_link = https://www.limxdynamics.com/en
| robots = [[CL-1]]
}}
[[Category:Companies]]
d2703b0dfbcda8e0d84361b129edefcacbd2aaf4
Applications
0
66
283
2024-04-24T22:51:25Z
104.7.66.79
0
Created page with "=Applications List= A non-comprehensive list of training frameworks is listed below. ===[https://github.com/leggedrobotics/legged_gym Legged Gym]=== Isaac Gym Environments fo..."
wikitext
text/x-wiki
=Applications List=
A non-comprehensive list of training frameworks is given below.
===[https://github.com/leggedrobotics/legged_gym Legged Gym]===
Isaac Gym Environments for Legged Robots.
===[https://github.com/roboterax/humanoid-gym Humanoid Gym]===
Training setup for walking with [[Xbot-L]].
===[https://github.com/kscalelabs/sim KScale Sim]===
Training setup for getting up and walking with [[Stompy]].
dcc27dbe09127c2a7c87798dfca8c958362d4b50
287
283
2024-04-24T22:57:36Z
104.7.66.79
0
/* Applications List */
wikitext
text/x-wiki
=Applications List=
A non-comprehensive list of training frameworks is given below.
===[https://github.com/leggedrobotics/legged_gym Legged Gym]===
Isaac Gym Environments for Legged Robots.
===[https://github.com/Alescontrela/AMP_for_hardware AMP for hardware]===
Adversarial Motion Priors Make Good Substitutes for Complex Reward Functions
===[https://github.com/ZhengyiLuo/PHC PHC]===
Official Implementation of the ICCV 2023 paper: Perpetual Humanoid Control for Real-time Simulated Avatars.
===[https://github.com/roboterax/humanoid-gym Humanoid Gym]===
Training setup for walking with [[Xbot-L]].
===[https://github.com/kscalelabs/sim KScale Sim]===
Training setup for getting up and walking with [[Stompy]].
0af6e8829cc616c85c3bad05d9939c1d586b70b1
288
287
2024-04-24T22:58:00Z
104.7.66.79
0
wikitext
text/x-wiki
=Applications List=
A non-comprehensive list of training frameworks is given below.
===[https://github.com/leggedrobotics/legged_gym Legged Gym]===
Isaac Gym Environments for Legged Robots.
===[https://github.com/Alescontrela/AMP_for_hardware AMP for hardware]===
Adversarial Motion Priors Make Good Substitutes for Complex Reward Functions
===[https://github.com/ZhengyiLuo/PHC PHC]===
Official Implementation of the ICCV 2023 paper: Perpetual Humanoid Control for Real-time Simulated Avatars.
===[https://github.com/roboterax/humanoid-gym Humanoid Gym]===
Training setup for walking with [[Xbot-L]].
===[https://github.com/kscalelabs/sim KScale Sim]===
Training setup for getting up and walking with [[Stompy]].
78eb6b45fa1c3aee693dcebe08f52972e66a293f
289
288
2024-04-24T22:59:15Z
104.7.66.79
0
wikitext
text/x-wiki
=Applications List=
A non-comprehensive list of training frameworks is given below.
===[https://github.com/leggedrobotics/legged_gym Legged Gym]===
Isaac Gym Environments for Legged Robots.
===[https://github.com/chengxuxin/extreme-parkour Extreme Parkour]===
Extreme Parkour with Legged Robots.
===[https://github.com/Alescontrela/AMP_for_hardware AMP for hardware]===
Adversarial Motion Priors Make Good Substitutes for Complex Reward Functions
===[https://github.com/ZhengyiLuo/PHC PHC]===
Official Implementation of the ICCV 2023 paper: Perpetual Humanoid Control for Real-time Simulated Avatars.
===[https://github.com/roboterax/humanoid-gym Humanoid Gym]===
Training setup for walking with [[Xbot-L]].
===[https://github.com/kscalelabs/sim KScale Sim]===
Training setup for getting up and walking with [[Stompy]].
0ed84ddf4368f699b5074eb72406cf103c4d9f10
File:Logo Anthrobotics Full.jpeg
6
67
290
2024-04-24T23:01:35Z
AnthroDev
8
Logo for Anthrobotics
wikitext
text/x-wiki
== Summary ==
Logo for Anthrobotics
cf09f4204755c59510c0154435653be09feb1240
Anthrobotics
0
68
293
2024-04-24T23:10:01Z
AnthroDev
8
Created page with "[[File:Logo Anthrobotics Full.jpeg|right|400px|thumb]] Anthrobotics is an Alberta-based startup company developing open-source humanoid robots and deployable autonomous syste..."
wikitext
text/x-wiki
[[File:Logo Anthrobotics Full.jpeg|right|400px|thumb]]
Anthrobotics is an Alberta-based startup company developing open-source humanoid robots and deployable autonomous systems. Their flagship project in development is a digitigrade humanoid robot called the [[Anthro]] (short for Anthropomorphic Robot).
{{infobox company
| name = Anthrobotics
| country = Canada
| website_link = https://anthrobotics.ca/
| robots = [[Anthro]]
}}
Anthrobotics was founded in 2016 as a continuation of "Project '87", a 2014 collaborative community project centered around building real-life walking replicas of the animatronic characters from the Five Nights at Freddy's video game series. As the animatronics required full-size humanoid robots to perform as mobile units rather than static displays, the scope of the project was expanded to develop a general-purpose, highly customizable humanoid.
Development of the Anthro is ongoing, with the end goal of providing users and developers with a modular, highly customizable humanoid that can take on a diverse range of anthropomorphic appearances. The Anthro is intended to be used in any general purpose application, or in niche roles.
[[Category:Companies]]
e4385b451e770c4e61398a6f04bb58cafe014f71
User:Modeless
2
69
296
2024-04-24T23:21:14Z
Modeless
7
Created page with "Hi I'm James Darpinian. Check my website: https://james.darpinian.com/ Follow me on X: https://x.com/modeless"
wikitext
text/x-wiki
Hi I'm James Darpinian. Check my website: https://james.darpinian.com/
Follow me on X: https://x.com/modeless
98560545d53c8d0d13ce62da2291aa7a74680b16
Main Page
0
1
303
302
2024-04-24T23:59:02Z
141.156.214.27
0
/* List of Humanoid Robots */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots. As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
Here are some resources to get started learning about humanoid robots.
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
|
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and in real environments
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Tesla]]
| [[Optimus]]
|-
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[XPENG]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[Proxy]]
|
|}
067611ad6c239f6d7e5440d0f493d3d9c3f0a048
304
303
2024-04-24T23:59:55Z
141.156.214.27
0
/* List of Humanoid Robots */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots. As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
Here are some resources to get started learning about humanoid robots.
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
|
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and in real environments
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Tesla]]
| [[Optimus]]
|-
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[XPENG]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[Proxy]]
|
|}
b45fb8e5014ea9e9017a9e3959aa1963b7921ab7
343
304
2024-04-25T19:41:14Z
24.130.242.94
0
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots. As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
Here are some resources to get started learning about humanoid robots.
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
|
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and in real environments
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Tesla]]
| [[Optimus]]
|-
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[XPENG]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[Proxy]]
|
|}
f07457b16e2bdb603295df117896c45ce7e5aeb7
346
343
2024-04-25T21:14:23Z
Ben
2
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots. As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
Here are some resources to get started learning about humanoid robots.
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
|
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and in real environments
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Tesla]]
| [[Optimus]]
|-
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[XPENG]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[Proxy]]
|
|}
97be954cc43be1b374add9b69f5021544cc4a81b
Proxy
0
70
305
2024-04-25T00:04:25Z
141.156.214.27
0
Created page with "Proxy refers to a category of robots specifically designed for real-time, remote operation by humans. Unlike autonomous robots that rely on pre-programmed instructions or arti..."
wikitext
text/x-wiki
Proxy refers to a category of robots specifically designed for real-time, remote operation by humans. Unlike autonomous robots that rely on pre-programmed instructions or artificial intelligence for decision-making, Proxy robots act as physical extensions of their human operators, mimicking their movements and actions with minimal latency. This technology allows humans to experience and interact with the physical world from afar, opening up new possibilities for remote work, exploration, and accessibility. Proxy robots utilize various methods for control, including virtual reality headsets and haptic feedback systems, providing operators with an immersive and intuitive experience. While Proxy technology is still in its early stages, several existing robotics firms are partnering with Proxy developers to incorporate this human-centric approach into their humanoid robots. To learn more about Proxy technology and potential collaborations, email contact@what-is-proxy.com.
f22ab625a74201c818b38a983d567e81f036b394
306
305
2024-04-25T00:07:00Z
141.156.214.27
0
Proxy refers to a category of robots specifically designed for real-time, remote operation by humans.
wikitext
text/x-wiki
== Proxy refers to a category of robots specifically designed for real-time, remote operation by humans. ==
Unlike autonomous robots that rely on pre-programmed instructions or artificial intelligence for decision-making, Proxy robots act as physical extensions of their human operators, mimicking their movements and actions with minimal latency. This technology allows humans to experience and interact with the physical world from afar, opening up new possibilities for remote work, exploration, and accessibility. Proxy robots utilize various methods for control, including virtual reality headsets and haptic feedback systems, providing operators with an immersive and intuitive experience. While Proxy technology is still in its early stages, existing robotics firms are partnering with Proxy to incorporate their human-centric approach into their humanoid robots. To learn more about Proxy technology and inquire about potential collaborations, email contact@what-is-proxy.com.
c43f4b57334c2328dd8b50412c44dfa67eeaf853
307
306
2024-04-25T00:08:19Z
141.156.214.27
0
wikitext
text/x-wiki
== Proxy refers to a category of robots specifically designed for real-time, remote operation by humans. ==
Unlike autonomous robots that rely on pre-programmed instructions or artificial intelligence for decision-making, Proxy robots act as physical extensions of their human operators, mimicking their movements and actions with minimal latency. This technology allows humans to experience and interact with the physical world from afar, opening up new possibilities for remote work, exploration, and accessibility. Proxy robots utilize various methods for control, including virtual reality headsets and haptic feedback systems, providing operators with an immersive and intuitive experience. While Proxy technology is still in its early stages, existing robotics firms are partnering with Proxy to incorporate their human-centric approach into their humanoid robots. To learn more about Proxy technology and inquire about potential collaborations, email contact@what-is-proxy.com.
2c6fbccb92c50f8cfcc2773c6c0a5daed8a304f4
308
307
2024-04-25T00:14:23Z
141.156.214.27
0
/* Proxy refers to a category of robots specifically designed for real-time, remote operation by humans. */
wikitext
text/x-wiki
== Proxy refers to a category of robots specifically designed for real-time, remote operation by humans. ==
Unlike autonomous robots that rely on pre-programmed instructions or artificial intelligence for decision-making, Proxy robots act as physical extensions of their human operators, mimicking their movements and actions with minimal latency. This technology allows humans to experience and interact with the physical world from afar, opening up new possibilities for remote work, exploration, and accessibility. Proxy robots utilize various methods for control, including virtual reality headsets and haptic feedback systems, providing operators with an immersive and intuitive experience. While Proxy technology is still in its early stages, existing robotics firms are partnering with Proxy to incorporate their human-centric approach into their humanoid robots.
Comments from the team:
* To learn more about Proxy technology, visit https://what-is-proxy.com
* To inquire about potential collaborations, email '''contact@what-is-proxy.com'''.
18e4e22a480d31d9e8f9ca680f025a391bae2318
309
308
2024-04-25T00:24:47Z
141.156.214.27
0
wikitext
text/x-wiki
== Proxy refers to a category of robots specifically designed for real-time, remote operation by humans. ==
Unlike autonomous robots that rely on pre-programmed instructions or artificial intelligence for decision-making, Proxy robots act as physical extensions of their human operators, mimicking their movements and actions with minimal latency. This technology allows humans to experience and interact with the physical world from afar, opening up new possibilities for remote work, exploration, and accessibility. Proxy robots utilize various methods for control, including virtual reality headsets and haptic feedback systems, providing operators with an immersive and intuitive experience. While Proxy technology is still in its early stages, existing robotics firms are partnering with Proxy to incorporate their human-centric approach into their humanoid robots.
Comments from the team:
* To learn more about Proxy, visit https://what-is-proxy.com
6c66be3a1327ee40c3b410edac3b2b0f84e58390
310
309
2024-04-25T00:25:36Z
141.156.214.27
0
wikitext
text/x-wiki
== Proxy refers to a category of robots specifically designed for real-time, remote operation by humans. ==
Unlike autonomous robots that rely on pre-programmed instructions or artificial intelligence for decision-making, Proxy robots act as physical extensions of their human operators, mimicking their movements and actions with minimal latency. This technology allows humans to experience and interact with the physical world from afar, opening up new possibilities for remote work, exploration, and accessibility. Proxy robots utilize various methods for control, including virtual reality headsets and haptic feedback systems, providing operators with an immersive and intuitive experience. While Proxy technology is still in its early stages, existing robotics firms are partnering with Proxy to incorporate their human-centric approach into their humanoid robots.
Comments from the team:
* To learn more about Proxy, visit https://what-is-proxy.com
385171bae11ba5b6ad65d13b9025979a1e028bad
338
310
2024-04-25T17:28:53Z
Mrroboto
5
wikitext
text/x-wiki
== Proxy refers to a category of robots specifically designed for real-time, remote operation by humans. ==
Unlike autonomous robots that rely on pre-programmed instructions or artificial intelligence for decision-making, Proxy robots act as physical extensions of their human operators, mimicking their movements and actions with minimal latency. This technology allows humans to experience and interact with the physical world from afar, opening up new possibilities for remote work, exploration, and accessibility. Proxy robots utilize various methods for control, including virtual reality headsets and haptic feedback systems, providing operators with an immersive and intuitive experience. While Proxy technology is still in its early stages, existing robotics firms are partnering with Proxy to incorporate their human-centric approach into their humanoid robots.
Comments from the team:
* To learn more about Proxy, visit https://what-is-proxy.com
[[Category: Teleop]]
5cdfc77741d3dcb88878963ae49980442c3f45bc
339
338
2024-04-25T17:29:17Z
Mrroboto
5
wikitext
text/x-wiki
=== About ===
Proxy refers to a category of robots specifically designed for real-time, remote operation by humans.
=== Description ===
Unlike autonomous robots that rely on pre-programmed instructions or artificial intelligence for decision-making, Proxy robots act as physical extensions of their human operators, mimicking their movements and actions with minimal latency. This technology allows humans to experience and interact with the physical world from afar, opening up new possibilities for remote work, exploration, and accessibility. Proxy robots utilize various methods for control, including virtual reality headsets and haptic feedback systems, providing operators with an immersive and intuitive experience. While Proxy technology is still in its early stages, existing robotics firms are partnering with Proxy to incorporate their human-centric approach into their humanoid robots.
Comments from the team:
* To learn more about Proxy, visit https://what-is-proxy.com
[[Category: Teleop]]
6c987a4ffbd5a3d39a40059ab1c95bb0018318c7
Walker X
0
71
311
2024-04-25T03:58:32Z
User2024
6
Created page with "Walker X is a humanoid robot from [[UBTech]]. {{infobox robot | name = Walker X | organization = [[UBTech]] | height = 130 cm | weight = 63 kg | single_hand_payload = 1.5 | t..."
wikitext
text/x-wiki
Walker X is a humanoid robot from [[UBTech]].
{{infobox robot
| name = Walker X
| organization = [[UBTech]]
| height = 130 cm
| weight = 63 kg
| single_hand_payload = 1.5
| two_hand_payload = 3
| cost = USD 960,000
}}
[[Category:Robots]]
cec319bba5b1aaf629eb63538c59ab9723e34768
313
311
2024-04-25T04:00:52Z
User2024
6
wikitext
text/x-wiki
Walker X is a humanoid robot from [[UBTech]].
{{infobox robot
| name = Walker X
| organization = [[UBTech]]
| height = 130 cm
| weight = 63 kg
| single_hand_payload = 1.5
| two_hand_payload = 3
| cost = USD 960,000
| video_link = https://www.youtube.com/watch?v=4ZL3LgdKNbw
}}
[[Category:Robots]]
22663b29dd7050cade911ac50b93b6913669f849
UBTech
0
72
312
2024-04-25T04:00:17Z
User2024
6
Created page with "UBTech is building a humanoid robot called [[Walker X]]. {{infobox company | name = UBTech | country = China | website_link = https://www.ubtrobot.com/ | robots = [[Walker X]..."
wikitext
text/x-wiki
UBTech is building a humanoid robot called [[Walker X]].
{{infobox company
| name = UBTech
| country = China
| website_link = https://www.ubtrobot.com/
| robots = [[Walker X]]
}}
[[Category:Companies]]
10f13a655e2e68a1b8f2dc0baf64230aacc592b6
315
312
2024-04-25T04:05:28Z
User2024
6
wikitext
text/x-wiki
UBTech is building humanoid robots called [[Walker X]] and [[Panda Robot]].
{{infobox company
| name = UBTech
| country = China
| website_link = https://www.ubtrobot.com/
| robots = [[Walker X]], [[Panda Robot]]
}}
[[Category:Companies]]
6d9565187459529a208f31d5732d5782d50b446c
316
315
2024-04-25T04:06:04Z
User2024
6
wikitext
text/x-wiki
UBTech is building humanoid robots called [[Walker X]] and [[Panda Robot]].
{{infobox company
| name = UBTech
| country = China
| website_link = https://www.ubtrobot.com/
| robots = [[Walker X]], [[Panda Robot]]
}}
[[Category:Companies]]
249e261ca0940994da7ae03206bad04c5e052512
318
316
2024-04-25T04:07:53Z
User2024
6
wikitext
text/x-wiki
UBTech is building humanoid robots called [[Walker X]], [[Panda Robot]] and [[Walker S]].
{{infobox company
| name = UBTech
| country = China
| website_link = https://www.ubtrobot.com/
| robots = [[Walker X]], [[Panda Robot]], [[Walker S]]
}}
[[Category:Companies]]
8728a8c97c3059a387e5997f681bc78192c44dbe
Panda Robot
0
73
314
2024-04-25T04:04:10Z
User2024
6
Created page with "Panda Robot is a humanoid robot from [[UBTech]]. {{infobox robot | name = Panda Robot | organization = [[UBTech]] | height = 130 cm | weight = 63 kg | single_hand_payload = 1..."
wikitext
text/x-wiki
Panda Robot is a humanoid robot from [[UBTech]].
{{infobox robot
| name = Panda Robot
| organization = [[UBTech]]
| height = 130 cm
| weight = 63 kg
| single_hand_payload = 1.5
| two_hand_payload = 3
| cost = USD 960,000
}}
[[Category:Robots]]
381f47cd90976c0fb6c30f49d2d8622695225a90
Walker S
0
74
317
2024-04-25T04:07:23Z
User2024
6
Created page with "Walker S is a humanoid robot from [[UBTech]]. {{infobox robot | name = Walker S | organization = [[UBTech]] | height = | weight = | single_hand_payload = | two_hand_payload..."
wikitext
text/x-wiki
Walker S is a humanoid robot from [[UBTech]].
{{infobox robot
| name = Walker S
| organization = [[UBTech]]
| height =
| weight =
| single_hand_payload =
| two_hand_payload =
| cost =
}}
[[Category:Robots]]
4155935e48cc89cc22aa75ae2be305d68fa0a504
Wukong-IV
0
75
319
2024-04-25T05:11:35Z
User2024
6
Created page with "Wukong-IV is a humanoid robot from [[Deep Robotics]]. {{infobox robot | name = Wukong-IV | organization = [[Deep Robotics]] | height = 140 cm | weight = 45 kg | single_hand_p..."
wikitext
text/x-wiki
Wukong-IV is a humanoid robot from [[Deep Robotics]].
{{infobox robot
| name = Wukong-IV
| organization = [[Deep Robotics]]
| height = 140 cm
| weight = 45 kg
| single_hand_payload =
| two_hand_payload =
| cost =
| video_link = https://www.youtube.com/watch?v=fbk4fYc6U14
}}
[[Category:Robots]]
2f89a47f8314e7f8d95d970a117bdbdfbf105bcc
Deep Robotics
0
76
320
2024-04-25T05:19:17Z
User2024
6
Created page with "Deep Robotics is building a humanoid robot called [[Wukong-IV]]. {{infobox company | name = Deep Robotics | country = China | website_link = https://www.deeprobotics.cn/en |..."
wikitext
text/x-wiki
Deep Robotics is building a humanoid robot called [[Wukong-IV]].
{{infobox company
| name = Deep Robotics
| country = China
| website_link = https://www.deeprobotics.cn/en
| robots = [[Wukong-IV]]
}}
[[Category:Companies]]
4e74e6b2665bebe1006fcf7077e22f4c2be72a9c
XR4
0
77
321
2024-04-25T05:44:52Z
User2024
6
Created page with "XR4 is a humanoid robot from [[DATAA Robotics]]. {{infobox robot | name = XR4 | organization = [[DATAA Robotics]] | height = | weight = | single_hand_payload | two_hand_pa..."
wikitext
text/x-wiki
XR4 is a humanoid robot from [[DATAA Robotics]].
{{infobox robot
| name = XR4
| organization = [[DATAA Robotics]]
| height =
| weight =
| single_hand_payload =
| two_hand_payload =
| cost =
}}
[[Category:Robots]]
3213a6fde6a5f433249dbb03515161ff6f1d3704
DATAA Robotics
0
78
322
2024-04-25T05:45:15Z
User2024
6
Created page with "DATAA Robotics is building a humanoid robot called [[XR4]]. {{infobox company | name = DATAA Robotics | country = China | website_link = https://www.dataarobotics.com/en | ro..."
wikitext
text/x-wiki
DATAA Robotics is building a humanoid robot called [[XR4]].
{{infobox company
| name = DATAA Robotics
| country = China
| website_link = https://www.dataarobotics.com/en
| robots = [[XR4]]
}}
[[Category:Companies]]
a416f7e2f439bfa56326e6fe290e0377d3b20502
ZEUS2Q
0
79
323
2024-04-25T05:56:25Z
User2024
6
Created page with "ZEUS2Q is a humanoid robot from [[System Technology Works]]. {{infobox robot | name = ZEUS2Q | organization = [[System Technology Works]] | height = 127 cm | weight = 13.61 k..."
wikitext
text/x-wiki
ZEUS2Q is a humanoid robot from [[System Technology Works]].
{{infobox robot
| name = ZEUS2Q
| organization = [[System Technology Works]]
| height = 127 cm
| weight = 13.61 kg
}}
[[Category:Robots]]
9e448ace10e1576a040a0c29823e6e2e9493150b
System Technology Works
0
80
324
2024-04-25T05:56:50Z
User2024
6
Created page with "System Technology Works is building a humanoid robot called [[ZEUS2Q]]. {{infobox company | name = System Technology Works | country = USA | website_link = https://www.system..."
wikitext
text/x-wiki
System Technology Works is building a humanoid robot called [[ZEUS2Q]].
{{infobox company
| name = System Technology Works
| country = USA
| website_link = https://www.systemtechnologyworks.com/
| robots = [[ZEUS2Q]]
}}
[[Category:Companies]]
8184bac9f7f1c1c640ceaabf86451936ec281279
Isaac Sim
0
18
325
56
2024-04-25T07:16:20Z
Stonet2000
9
wikitext
text/x-wiki
Isaac Sim is a simulator from NVIDIA that connects with the Omniverse platform. The core physics engine underlying Isaac Sim is PhysX.
=== Doing Simple Operations ===
'''Start Isaac Sim'''
* Open Omniverse Launcher
* Navigate to the Library
* Under “Apps” click “Isaac Sim”
* Click “Launch”
* There are multiple options for launching: choose the standard option to show the GUI, or a headless option if you plan to stream.
* Choose <code>File > Open...</code> and select the <code>.usd</code> model corresponding to the robot you want to simulate.
'''Connecting streaming client'''
* Start Isaac Sim in Headless (Native) mode
* Open Omniverse Streaming Client
* Connect to the server
[[Category: Simulators]]
e1f470c78f865948ede2d6e32caa9a81526abfa4
342
325
2024-04-25T19:40:42Z
24.130.242.94
0
wikitext
text/x-wiki
Isaac Sim is a simulator from NVIDIA that connects with the Omniverse platform. The core physics engine underlying Isaac Sim is PhysX.
=== Doing Simple Operations ===
'''Start Isaac Sim'''
* Open Omniverse Launcher
* Navigate to the Library
* Under “Apps” click “Isaac Sim”
* Click “Launch”
** There are multiple options for launching: choose the standard option to show the GUI, or a headless option if you plan to stream.
* Choose <code>File > Open...</code> and select the <code>.usd</code> model corresponding to the robot you want to simulate.
'''Connecting streaming client'''
* Start Isaac Sim in Headless (Native) mode
* Open Omniverse Streaming Client
* Connect to the server
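'''Scripting Isaac Sim from Python'''
Isaac Sim can also be driven from Python rather than the GUI. Below is a minimal sketch, assuming a recent Isaac Sim release run with its bundled Python interpreter (<code>python.sh</code>); the USD path is a placeholder.
<syntaxhighlight lang="python">
# Minimal headless Isaac Sim sketch (run with Isaac Sim's bundled python.sh).
# Assumed: a recent Isaac Sim release; the USD path below is a placeholder.
from omni.isaac.kit import SimulationApp

# Start the Kit application in headless mode (no GUI).
simulation_app = SimulationApp({"headless": True})

import omni.usd  # Kit modules are only importable after SimulationApp starts

# Open the robot's USD stage, then step the app for a while.
omni.usd.get_context().open_stage("/path/to/robot.usd")
for _ in range(100):
    simulation_app.update()

simulation_app.close()
</syntaxhighlight>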
[[Category: Simulators]]
c34d38192832cd8dbf7eb2b98a99deb9c5eae494
Learning algorithms
0
32
326
295
2024-04-25T07:23:39Z
Stonet2000
9
/* Physics engines */
wikitext
text/x-wiki
= Learning algorithms =
Learning algorithms make it possible to train humanoids to perform different skills, such as manipulation or locomotion. Below is an overview of general approaches to training machine learning models for humanoid robots, with example [[applications]].
== Physics engines ==
Physics engines are software libraries designed to simulate physical systems in a virtual environment. They are crucial in a variety of fields such as video games, animation, robotics, and engineering simulations. These engines handle the mathematics involved in simulating physical processes like motion, collisions, and fluid dynamics.
===PhysX===
PhysX is a physics engine by NVIDIA used primarily for video games and real-time simulations. It supports rigid body dynamics, cloth simulation, and particle effects, enhancing realism and interactivity in 3D environments.
===Mujoco===
MuJoCo (Multi-Joint dynamics with Contact) is a physics engine designed for research in robotics and biomechanics. It's known for its speed, accuracy, and ease of use, making it popular for simulating complex systems with robotics and articulated structures.
===Bullet===
Bullet is a physics engine supporting real-time collision detection and multi-physics simulation for VR, games, visual effects, robotics, and machine learning.
==Simulators==
===[[Isaac Sim]]===
===[https://github.com/haosulab/ManiSkill ManiSkill]===
===[[VSim]]===
==Training frameworks==
Popular training frameworks are listed here with example applications.
===[https://github.com/NVIDIA-Omniverse/IsaacGymEnvs Isaac Gym]===
Isaac Gym is NVIDIA's robotics simulation tool, part of the Isaac SDK. It leverages GPU acceleration to enable the simulation of thousands of robot bodies simultaneously, making it highly efficient for training machine learning models in robotics. It's designed to streamline robotics applications, focusing on reinforcement learning in a virtual environment.
===[https://gymnasium.farama.org/ Gymnasium]===
Gymnasium is an open-source toolkit for developing and comparing reinforcement learning algorithms. Originally developed by OpenAI as "Gym," it provides a standardized set of environments (like Atari games, robotic simulations, etc.) to test and benchmark AI algorithms. It's widely used in the AI research community to foster innovation and replication in RL studies.
===[[Applications]]===
Over the last decade, several advances have been made in learning locomotion and manipulation skills in simulation. See the non-comprehensive list here.
== Training methods ==
===[[Imitation learning]]===
Imitation Learning is a technique where models learn to perform tasks by mimicking expert behaviors. This approach is often used when defining explicit reward functions is challenging. It accelerates learning by using pre-collected datasets of expert demonstrations, reducing the need for trial-and-error in initial learning phases.
===[[Reinforcement Learning]]===
Reinforcement Learning involves agents learning to make decisions by interacting with an environment to maximize cumulative rewards. It's foundational in fields where sequential decision-making is crucial, like gaming, autonomous vehicles, and robotics. RL uses methods like Q-learning and policy gradient to iteratively improve agent performance based on feedback from the environment.
[[Category: Software]]
56e8987d2438cc98a87de485f22761d959498d57
341
326
2024-04-25T19:30:09Z
206.0.71.44
0
/* Learning algorithms */
wikitext
text/x-wiki
Learning algorithms make it possible to train humanoids to perform different skills, such as manipulation or locomotion. Below is an overview of general approaches to training machine learning models for humanoid robots, with example [[applications]]. Typically you need a simulator, a training framework, and a machine learning method to train end-to-end behaviors.
== Physics engines ==
Physics engines are software libraries designed to simulate physical systems in a virtual environment. They are crucial in a variety of fields such as video games, animation, robotics, and engineering simulations. These engines handle the mathematics involved in simulating physical processes like motion, collisions, and fluid dynamics.
For a much more comprehensive overview see [https://simulately.wiki/docs/ Simulately].
===PhysX===
PhysX is a physics engine by NVIDIA used primarily for video games and real-time simulations. It supports rigid body dynamics, cloth simulation, and particle effects, enhancing realism and interactivity in 3D environments.
===Mujoco===
MuJoCo (Multi-Joint dynamics with Contact) is a physics engine designed for research in robotics and biomechanics. It's known for its speed, accuracy, and ease of use, making it popular for simulating complex systems with robotics and articulated structures.
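As a concrete illustration, the sketch below uses the official <code>mujoco</code> Python bindings (an assumption; install with <code>pip install mujoco</code>) to build a one-body model and step it.
<syntaxhighlight lang="python">
# Small MuJoCo sketch using the official Python bindings (pip install mujoco).
# The single-body MJCF model below is a made-up minimal example.
import mujoco

XML = """
<mujoco>
  <worldbody>
    <body name="box" pos="0 0 1">
      <freejoint/>
      <geom type="box" size="0.1 0.1 0.1" mass="1"/>
    </body>
  </worldbody>
</mujoco>
"""

model = mujoco.MjModel.from_xml_string(XML)
data = mujoco.MjData(model)

# Step the simulation; the box falls under gravity (there is no floor here).
for _ in range(1000):
    mujoco.mj_step(model, data)

print("box height after 1000 steps:", data.qpos[2])
</syntaxhighlight>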
===Bullet===
Bullet is a physics engine supporting real-time collision detection and multi-physics simulation for VR, games, visual effects, robotics, and machine learning.
==Simulators==
===[[Isaac Sim]]===
===[https://github.com/haosulab/ManiSkill ManiSkill]===
===[[VSim]]===
==Training frameworks==
Popular training frameworks are listed here with example applications.
===[https://github.com/NVIDIA-Omniverse/IsaacGymEnvs Isaac Gym]===
Isaac Gym is NVIDIA's robotics simulation tool, part of the Isaac SDK. It leverages GPU acceleration to enable the simulation of thousands of robot bodies simultaneously, making it highly efficient for training machine learning models in robotics. It's designed to streamline robotics applications, focusing on reinforcement learning in a virtual environment.
===[https://gymnasium.farama.org/ Gymnasium]===
Gymnasium is an open-source toolkit for developing and comparing reinforcement learning algorithms. Originally developed by OpenAI as "Gym," it provides a standardized set of environments (like Atari games, robotic simulations, etc.) to test and benchmark AI algorithms. It's widely used in the AI research community to foster innovation and replication in RL studies.
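The core interaction loop looks like the sketch below (assuming <code>pip install gymnasium</code>); the random policy is a placeholder for a trained agent.
<syntaxhighlight lang="python">
# Basic Gymnasium interaction loop (pip install gymnasium).
import gymnasium as gym

env = gym.make("CartPole-v1")
observation, info = env.reset(seed=0)

for _ in range(1000):
    action = env.action_space.sample()  # placeholder for a learned policy
    observation, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        observation, info = env.reset()

env.close()
</syntaxhighlight>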
===[[Applications]]===
Over the last decade, several advances have been made in learning locomotion and manipulation skills in simulation. See the non-comprehensive list here.
== Training methods ==
===[[Imitation learning]]===
Imitation Learning is a technique where models learn to perform tasks by mimicking expert behaviors. This approach is often used when defining explicit reward functions is challenging. It accelerates learning by using pre-collected datasets of expert demonstrations, reducing the need for trial-and-error in initial learning phases.
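In its simplest form, behavioral cloning, imitation learning reduces to supervised regression from observations to expert actions. The sketch below assumes PyTorch and uses a random placeholder dataset in place of real demonstrations.
<syntaxhighlight lang="python">
# Behavioral cloning sketch in PyTorch: regress expert actions from observations.
# The random "expert" dataset below is a placeholder for real demonstrations.
import torch
import torch.nn as nn

obs_dim, act_dim = 32, 8
expert_obs = torch.randn(1024, obs_dim)  # placeholder demonstration observations
expert_act = torch.randn(1024, act_dim)  # placeholder demonstration actions

policy = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU(), nn.Linear(256, act_dim))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

for epoch in range(10):
    pred = policy(expert_obs)
    loss = nn.functional.mse_loss(pred, expert_act)  # match the expert's actions
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
</syntaxhighlight>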
===[[Reinforcement Learning]]===
Reinforcement Learning involves agents learning to make decisions by interacting with an environment to maximize cumulative rewards. It's foundational in fields where sequential decision-making is crucial, like gaming, autonomous vehicles, and robotics. RL uses methods like Q-learning and policy gradient to iteratively improve agent performance based on feedback from the environment.
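The Q-learning update mentioned above fits in a few lines. The sketch below assumes a small discrete Gymnasium environment (<code>FrozenLake-v1</code>) purely for illustration.
<syntaxhighlight lang="python">
# Tabular Q-learning sketch on a small discrete Gymnasium environment.
import gymnasium as gym
import numpy as np

env = gym.make("FrozenLake-v1", is_slippery=False)
Q = np.zeros((env.observation_space.n, env.action_space.n))
alpha, gamma, epsilon = 0.1, 0.99, 0.1

for episode in range(2000):
    state, _ = env.reset()
    done = False
    while not done:
        # Epsilon-greedy action selection.
        if np.random.rand() < epsilon:
            action = env.action_space.sample()
        else:
            action = int(np.argmax(Q[state]))
        next_state, reward, terminated, truncated, _ = env.step(action)
        # Q-learning update: move Q(s, a) toward the bootstrapped target.
        target = reward + gamma * np.max(Q[next_state]) * (not terminated)
        Q[state, action] += alpha * (target - Q[state, action])
        state = next_state
        done = terminated or truncated
</syntaxhighlight>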
[[Category: Software]]
2860a19ac7cf857bd829ac7d1b945c47d4afeb94
Atlas
0
81
327
2024-04-25T15:18:45Z
User2024
6
Created page with "Atlas is a humanoid robot from [[Boston Dynamics]]. {{infobox robot | name = Atlas | organization = [[System Technology Works]] | height = 150 cm | weight = 89 kg | single_ha..."
wikitext
text/x-wiki
Atlas is a humanoid robot from [[Boston Dynamics]].
{{infobox robot
| name = Atlas
| organization = [[Boston Dynamics]]
| height = 150 cm
| weight = 89 kg
| single_hand_payload =
| two_hand_payload =
| cost =
}}
[[Category:Robots]]
5f953a3de0ea59876ba24e9d7912c57d5cce13c2
328
327
2024-04-25T15:19:20Z
User2024
6
wikitext
text/x-wiki
Atlas is a humanoid robot from [[Boston Dynamics]].
{{infobox robot
| name = Atlas
| organization = [[Boston Dynamics]]
| height = 150 cm
| weight = 89 kg
| single_hand_payload =
| two_hand_payload =
| cost =
}}
[[Category:Robots]]
2cc9b28fef32d25c31b8b92302082ad76f09f3bb
330
328
2024-04-25T15:20:47Z
User2024
6
wikitext
text/x-wiki
Atlas is a humanoid robot from [[Boston Dynamics]].
{{infobox robot
| name = Atlas
| organization = [[Boston Dynamics]]
| height = 150 cm
| weight = 89 kg
| video_link = https://www.youtube.com/watch?v=29ECwExc-_M
| single_hand_payload =
| two_hand_payload =
| cost =
}}
[[Category:Robots]]
5e5d99ddbecaa0d4c8e7bb85349d9ca9e1da44e5
Boston Dynamics
0
82
329
2024-04-25T15:19:36Z
User2024
6
Created page with "Boston Dynamics is building a humanoid robot called [[Atlas]]. {{infobox company | name = Boston Dynamics | country = USA | website_link = https://bostondynamics.com/ | robot..."
wikitext
text/x-wiki
Boston Dynamics is building a humanoid robot called [[Atlas]].
{{infobox company
| name = Boston Dynamics
| country = USA
| website_link = https://bostondynamics.com/
| robots = [[Atlas]]
}}
[[Category:Companies]]
5c887274a6346f40f073bfcb90fc8cdbaec1a4a9
HUBO
0
83
331
2024-04-25T15:24:53Z
User2024
6
Created page with "HUBO is a humanoid robot from [[Rainbow Robotics]]. {{infobox robot | name = HUBO | organization = [[Rainbow Robotics]] | height = 170 cm | weight = 80 kg | single_hand_paylo..."
wikitext
text/x-wiki
HUBO is a humanoid robot from [[Rainbow Robotics]].
{{infobox robot
| name = HUBO
| organization = [[Rainbow Robotics]]
| height = 170 cm
| weight = 80 kg
| single_hand_payload =
| two_hand_payload =
| cost = USD 320,000
}}
[[Category:Robots]]
69530615f7ead20f43ea83dd268609739f2e2a84
333
331
2024-04-25T15:27:07Z
User2024
6
wikitext
text/x-wiki
HUBO is a humanoid robot from [[Rainbow Robotics]].
{{infobox robot
| name = HUBO
| organization = [[Rainbow Robotics]]
| height = 170 cm
| weight = 80 kg
| video_link = https://www.youtube.com/watch?v=r2pKEVTddy4
| single_hand_payload =
| two_hand_payload =
| cost = USD 320,000
}}
[[Category:Robots]]
dcd976f2bf1ef677404c4a1aa7829410a5926075
Rainbow Robotics
0
84
332
2024-04-25T15:25:31Z
User2024
6
Created page with "Rainbow Robotics is building a humanoid robot called [[HUBO]]. {{infobox company | name = Rainbow Robotics | country = South Korea | website_link = https://www.rainbow-roboti..."
wikitext
text/x-wiki
Rainbow Robotics is building a humanoid robot called [[HUBO]].
{{infobox company
| name = Rainbow Robotics
| country = South Korea
| website_link = https://www.rainbow-robotics.com/
| robots = [[HUBO]]
}}
[[Category:Companies]]
f411e775cf99891b465c825915ae1d1544488ce8
Phoenix
0
53
334
252
2024-04-25T17:25:30Z
Ben
2
wikitext
text/x-wiki
Phoenix is a humanoid robot from [[Sanctuary AI]].
{{infobox robot
| name = Phoenix
| organization = Sanctuary AI
| height = 5 ft 7 in (170 cm)
| weight = 70 kg (155 lbs)
| two_hand_payload = 25
| video_link = https://youtube.com/watch?v=FH3zbUSMAAU
}}
On May 16 2024, Sanctuary released [https://sanctuary.ai/resources/news/sanctuary-ai-unveils-phoenix-a-humanoid-general-purpose-robot-designed-for-work/ Phoenix Gen 7].
[[Category:Robots]]
d7f7c22b8bd19ab3101e0e9897b3273c0e3613c2
336
334
2024-04-25T17:27:07Z
Mrroboto
5
wikitext
text/x-wiki
Phoenix is a humanoid robot from [[Sanctuary AI]].
{{infobox robot
| name = Phoenix
| organization = Sanctuary AI
| height = 5 ft 7 in (170 cm)
| weight = 70 kg (155 lbs)
| two_hand_payload = 25
| video_link = https://youtube.com/watch?v=FH3zbUSMAAU
}}
On May 16 2024, Sanctuary released [https://sanctuary.ai/resources/news/sanctuary-ai-unveils-phoenix-a-humanoid-general-purpose-robot-designed-for-work/ Phoenix Gen 7].
[[File:Main-image-phoenix-annoucement.jpg|thumb]]
[[Category:Robots]]
0e39bdb32d07df4471ea468c91e0a8c378758369
337
336
2024-04-25T17:27:44Z
Mrroboto
5
wikitext
text/x-wiki
Phoenix is a humanoid robot from [[Sanctuary AI]].
{{infobox robot
| name = Phoenix
| organization = Sanctuary AI
| height = 5 ft 7 in (170 cm)
| weight = 70 kg (155 lbs)
| two_hand_payload = 25
| video_link = https://youtube.com/watch?v=FH3zbUSMAAU
}}
On May 16 2024, Sanctuary released [https://sanctuary.ai/resources/news/sanctuary-ai-unveils-phoenix-a-humanoid-general-purpose-robot-designed-for-work/ Phoenix Gen 7].
[[File:Main-image-phoenix-annoucement.jpg|none|500px|Phoenix Gen 7|thumb]]
[[Category:Robots]]
0812ff6b58d7c71aa8fff8323e9461fd5fd0707a
File:Main-image-phoenix-annoucement.jpg
6
85
335
2024-04-25T17:26:59Z
Mrroboto
5
wikitext
text/x-wiki
Phoenix Gen 7 press release image
adb64e395b5464b8cf19a4215f2160c1668ad7e2
Reinforcement Learning
0
34
340
273
2024-04-25T18:56:07Z
2.127.48.230
0
/* Training algorithms */
wikitext
text/x-wiki
==Training algorithms==
===[https://en.wikipedia.org/wiki/Advantage_Actor_Critic A2C]===
===[https://en.wikipedia.org/wiki/Proximal_policy_optimization PPO]===
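A short, hedged example of training PPO, assuming the third-party Stable-Baselines3 library (<code>pip install stable-baselines3</code>) and a toy Gymnasium task:
<syntaxhighlight lang="python">
# Training PPO on a toy task with Stable-Baselines3 (pip install stable-baselines3).
# Illustrative sketch only; hyperparameters are library defaults.
from stable_baselines3 import PPO

model = PPO("MlpPolicy", "CartPole-v1", verbose=1)
model.learn(total_timesteps=50_000)
model.save("ppo_cartpole")
</syntaxhighlight>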
===[https://spinningup.openai.com/en/latest/algorithms/sac.html SAC]===
e4afe1e8be68aaa87d905516fff02e9d7724ffb2
LimX Dynamics
0
65
344
286
2024-04-25T20:16:45Z
24.130.242.94
0
wikitext
text/x-wiki
LimX Dynamics was founded in 2022 in China. Their first product is a quadruped robot with wheels for feet; their second is a humanoid called [[CL-1]]. [https://www.youtube.com/watch?v=2dmjzMv-y-M Here is a video] showcasing their teleoperation setup for collecting training data.
{{infobox company
| name = LimX Dynamics
| country = China
| website_link = https://www.limxdynamics.com/en
| robots = [[CL-1]]
}}
[[Category:Companies]]
e623a16bfe4b575e99dc192c02535fb1d8b84570
Servo Design
0
61
345
263
2024-04-25T21:13:27Z
Ben
2
wikitext
text/x-wiki
This page contains information about how to build a good open-source servo.
=== Open Source Servos ===
* [https://github.com/unhuman-io/obot OBot]
* [https://github.com/atopile/spin-servo-drive SPIN Servo Drive]
=== Commercial Servos ===
* [https://www.myactuator.com/downloads-x-series RMD X Series]
** Based on the MIT Cheetah actuator
** Planetary gears with 1:6 to 1:8 reduction ratios
=== Controllers ===
* [https://github.com/napowderly/mcp2515 MCP2515 transceiver circuit]
[[Category: Hardware]]
[[Category: Electronics]]
72aa1ee0d9a9aaef0e0f1ea34bbdd94a664ca065
347
345
2024-04-25T21:17:30Z
Ben
2
wikitext
text/x-wiki
This page contains information about how to build a good open-source servo.
=== Open Source Servos ===
* [https://github.com/unhuman-io/obot OBot]
* [https://github.com/atopile/spin-servo-drive SPIN Servo Drive]
=== Commercial Servos ===
* [https://www.myactuator.com/downloads-x-series MyActuator X Series] [[MyActuator X-Series]]
** Based on the MIT Cheetah actuator
** Planetary gears with 1:6 to 1:8 reduction ratios
=== Controllers ===
* [https://github.com/napowderly/mcp2515 MCP2515 transceiver circuit]
[[Category: Hardware]]
[[Category: Electronics]]
83bcff1ca1e1244b6ea8d3593f8cefe0bd8f542b
Template:Infobox actuator
10
86
348
2024-04-25T21:19:50Z
Ben
2
Created page with "{{infobox | name = {{{name}}} | key1 = Name | value1 = {{{name}}} | key2 = Manufacturer | value2 = {{{manufacturer|}}} | key3 = Price | value3 = {{{price}}} | key4 = Purchase..."
wikitext
text/x-wiki
{{infobox
| name = {{{name}}}
| key1 = Name
| value1 = {{{name}}}
| key2 = Manufacturer
| value2 = {{{manufacturer|}}}
| key3 = Price
| value3 = {{{price}}}
| key4 = Purchase
| value4 = {{#if: {{{purchase_link|}}} | [{{{purchase_link}}} Link] }}
| key5 = Torque
| value5 = {{{torque|}}}
| key6 = Mass
| value6 = {{{mass|}}}
| key7 = CAD
| value7 = {{#if: {{{cad_link|}}} | [{{{cad_link}}} Link] }}
| key8 = Size
| value8 = {{{size}}}
}}
58bef1b36076a70d640462ef9b73f017817bde42
349
348
2024-04-25T21:20:00Z
Ben
2
wikitext
text/x-wiki
{{infobox
| name = {{{name}}}
| key1 = Name
| value1 = {{{name|}}}
| key2 = Manufacturer
| value2 = {{{manufacturer|}}}
| key3 = Price
| value3 = {{{price|}}}
| key4 = Purchase
| value4 = {{#if: {{{purchase_link|}}} | [{{{purchase_link}}} Link] }}
| key5 = Torque
| value5 = {{{torque|}}}
| key6 = Mass
| value6 = {{{mass|}}}
| key7 = CAD
| value7 = {{#if: {{{cad_link|}}} | [{{{cad_link}}} Link] }}
| key8 = Size
| value8 = {{{size|}}}
}}
f22d5e96a71156420a7d7e4e0d39f99971caf85f
350
349
2024-04-25T21:20:15Z
Ben
2
wikitext
text/x-wiki
{{infobox
| name = {{{name}}}
| key1 = Name
| value1 = {{{name}}}
| key2 = Manufacturer
| value2 = {{{manufacturer|}}}
| key3 = Price
| value3 = {{{price|}}}
| key4 = Purchase
| value4 = {{#if: {{{purchase_link|}}} | [{{{purchase_link}}} Link] }}
| key5 = Torque
| value5 = {{{torque|}}}
| key6 = Mass
| value6 = {{{mass|}}}
| key7 = CAD
| value7 = {{#if: {{{cad_link|}}} | [{{{cad_link}}} Link] }}
| key8 = Size
| value8 = {{{size|}}}
}}
3ef51852acd16ae46f72eed212ea6581a637f308
351
350
2024-04-25T21:22:19Z
Ben
2
wikitext
text/x-wiki
{{infobox
| name = {{{name}}}
| key1 = Name
| value1 = {{{name}}}
| key2 = Manufacturer
| value2 = {{{manufacturer|}}}
| key3 = Cost
| value3 = {{{cost|}}}
| key4 = Purchase
| value4 = {{#if: {{{purchase_link|}}} | [{{{purchase_link}}} Link] }}
| key5 = Torque
| value5 = {{{torque|}}}
| key6 = Weight
| value6 = {{{weight|}}}
| key7 = CAD
| value7 = {{#if: {{{cad_link|}}} | [{{{cad_link}}} Link] }}
| key8 = Size
| value8 = {{{size|}}}
}}
635a717a64015a08b816652e1a2a38e02da3b549
Contributing
0
59
352
257
2024-04-25T21:22:23Z
Ben
2
wikitext
text/x-wiki
=== Adding an Infobox ===
There are a few infobox templates which we use:
* [[Template:Infobox_company]]
* [[Template:Infobox_robot]]
* [[Template:Infobox_actuator]]
All of the fields are optional, except for <code>name</code>.
To add a company infobox, use:
<syntaxhighlight lang="text">
{{infobox company
| name = Name
| country = Country
| website_link = https://link.com/
| robots = [[Stompy]]
}}
</syntaxhighlight>
To add a robot infobox, use:
<syntaxhighlight lang="text">
{{infobox robot
| name = Name
| organization = Company
| video_link = https://youtube.com/
| cost = USD 1
| height = 10 ft
| weight = 100 kg
| speed = 1 m/s
| lift_force = 10 lb
| battery_life = 5 hr
| battery_capacity = 1 mWh
| purchase_link = https://buy.com
| number_made = 10
| dof = 10
| status = Available
}}
</syntaxhighlight>
To add an actuator infobox, use:
<syntaxhighlight lang="text">
{{infobox actuator
| name = Name
| manufacturer = Manufacturer
| cost = USD 1
| purchase_link = https://amazon.com/
| torque = 1 Nm
| weight = 1 kg
| cad_link = https://example.com/model.step
| size = 10cm radius
}}
</syntaxhighlight>
3f382668d5ce61a8aebf74b11b2027f33d7f2d76
Contributing
0
59
353
352
2024-04-25T21:22:43Z
Ben
2
wikitext
text/x-wiki
=== Adding an Infobox ===
There are a few infobox templates which we use:
* [[Template:Infobox_company]]
* [[Template:Infobox_robot]]
* [[Template:Infobox_actuator]]
All of the fields are optional, except for <code>name</code>.
To add a company infobox, use:
<syntaxhighlight lang="text">
{{infobox company
| name = Name
| country = Country
| website_link = https://link.com/
| robots = [[Stompy]]
}}
</syntaxhighlight>
To add a robot infobox, use:
<syntaxhighlight lang="text">
{{infobox robot
| name = Name
| organization = Company
| video_link = https://youtube.com/
| cost = USD 1
| height = 10 ft
| weight = 100 kg
| speed = 1 m/s
| lift_force = 10 lb
| battery_life = 5 hr
| battery_capacity = 1 mWh
| purchase_link = https://buy.com
| number_made = 10
| dof = 10
| status = Available
}}
</syntaxhighlight>
To add an actuator infobox, use:
<syntaxhighlight lang="text">
{{infobox actuator
| name = Name
| manufacturer = Manufacturer
| cost = USD 1
| purchase_link = https://amazon.com/
| torque = 1 Nm
| weight = 1 kg
| cad_link = https://example.com/model.step
| size = 10cm radius
}}
</syntaxhighlight>
c5b93881a1150b75249306d62d8f8b2ee1b31d12
355
353
2024-04-25T21:26:45Z
Ben
2
wikitext
text/x-wiki
=== Adding an Infobox ===
There are a few infobox templates which we use:
* [[Template:Infobox_company]]
* [[Template:Infobox_robot]]
* [[Template:Infobox_actuator]]
All of the fields are optional, except for <code>name</code>.
To add a company infobox, use:
<syntaxhighlight lang="text">
{{infobox company
| name = Name
| country = Country
| website_link = https://link.com/
| robots = [[Stompy]]
}}
</syntaxhighlight>
To add a robot infobox, use:
<syntaxhighlight lang="text">
{{infobox robot
| name = Name
| organization = Company
| video_link = https://youtube.com/
| cost = USD 1
| height = 10 ft
| weight = 100 kg
| speed = 1 m/s
| lift_force = 10 lb
| battery_life = 5 hr
| battery_capacity = 1 mWh
| purchase_link = https://buy.com
| number_made = 10
| dof = 10
| status = Available
}}
</syntaxhighlight>
To add an actuator infobox, use:
<syntaxhighlight lang="text">
{{infobox actuator
| name = Name
| manufacturer = Manufacturer
| cost = USD 1
| purchase_link = https://amazon.com/
| nominal_torque = 1 Nm
| peak_torque = 1 Nm
| weight = 1 kg
| size = 10cm radius
| gear_ratio = 1:8
| voltage = 48V
| cad_link = https://example.com/model.step
}}
</syntaxhighlight>
4e324da542b23672bcefa4146e2848d7f19612ec
357
355
2024-04-25T21:28:47Z
Ben
2
wikitext
text/x-wiki
=== Adding an Infobox ===
There are a few infobox templates which we use:
* [[Template:Infobox_company]]
* [[Template:Infobox_robot]]
* [[Template:Infobox_actuator]]
All of the fields are optional, except for <code>name</code>.
To add a company infobox, use:
<syntaxhighlight lang="text">
{{infobox company
| name = Name
| country = Country
| website_link = https://link.com/
| robots = [[Stompy]]
}}
</syntaxhighlight>
To add a robot infobox, use:
<syntaxhighlight lang="text">
{{infobox robot
| name = Name
| organization = Company
| video_link = https://youtube.com/
| cost = USD 1
| height = 10 ft
| weight = 100 kg
| speed = 1 m/s
| lift_force = 10 lb
| battery_life = 5 hr
| battery_capacity = 1 mWh
| purchase_link = https://buy.com
| number_made = 10
| dof = 10
| status = Available
}}
</syntaxhighlight>
To add an actuator infobox, use:
<syntaxhighlight lang="text">
{{infobox actuator
| name = Name
| manufacturer = Manufacturer
| cost = USD 1
| purchase_link = https://amazon.com/
| nominal_torque = 1 Nm
| peak_torque = 1 Nm
| weight = 1 kg
| size = 10cm radius
| gear_ratio = 1:8
| voltage = 48V
| cad_link = https://example.com/model.step
| interface = CAN
}}
</syntaxhighlight>
17033048c684b22dc1a0e2869d8528d2a59d6986
360
357
2024-04-25T21:30:21Z
Ben
2
wikitext
text/x-wiki
=== Adding an Infobox ===
There are a few infobox templates which we use:
* [[Template:Infobox_company]]
* [[Template:Infobox_robot]]
* [[Template:Infobox_actuator]]
All of the fields are optional, except for <code>name</code>.
To add a company infobox, use:
<syntaxhighlight lang="text">
{{infobox company
| name = Name
| country = Country
| website_link = https://link.com/
| robots = [[Stompy]]
}}
</syntaxhighlight>
To add a robot infobox, use:
<syntaxhighlight lang="text">
{{infobox robot
| name = Name
| organization = Company
| video_link = https://youtube.com/
| cost = USD 1
| height = 10 ft
| weight = 100 kg
| speed = 1 m/s
| lift_force = 10 lb
| battery_life = 5 hr
| battery_capacity = 1 mWh
| purchase_link = https://buy.com
| number_made = 10
| dof = 10
| status = Available
}}
</syntaxhighlight>
To add an actuator infobox, use:
<syntaxhighlight lang="text">
{{infobox actuator
| name = Name
| manufacturer = Manufacturer
| cost = USD 1
| purchase_link = https://amazon.com/
| nominal_torque = 1 Nm
| peak_torque = 1 Nm
| weight = 1 kg
| size = 10cm radius
| gear_ratio = 1:8
| voltage = 48V
| cad_link = https://example.com/model.step
| interface = CAN
| gear_type = Planetary
}}
</syntaxhighlight>
4f6bda509f5b849a58c086ab410f05caa0214a22
366
360
2024-04-25T21:34:14Z
Ben
2
wikitext
text/x-wiki
=== Adding an Infobox ===
There are a few infobox templates which we use:
* [[Template:Infobox_company]]
* [[Template:Infobox_robot]]
* [[Template:Infobox_actuator]]
All of the fields are optional, except for <code>name</code>.
To add a company infobox, use:
<syntaxhighlight lang="text">
{{infobox company
| name = Name
| country = Country
| website_link = https://link.com/
| robots = [[Stompy]]
}}
</syntaxhighlight>
To add a robot infobox, use:
<syntaxhighlight lang="text">
{{infobox robot
| name = Name
| organization = Company
| video_link = https://youtube.com/
| cost = USD 1
| height = 10 ft
| weight = 100 kg
| speed = 1 m/s
| lift_force = 10 lb
| battery_life = 5 hr
| battery_capacity = 1 mWh
| purchase_link = https://buy.com
| number_made = 10
| dof = 10
| status = Available
}}
</syntaxhighlight>
To add an actuator infobox, use:
<syntaxhighlight lang="text">
{{infobox actuator
| name = Name
| manufacturer = Manufacturer
| cost = USD 1
| purchase_link = https://amazon.com/
| nominal_torque = 1 Nm
| peak_torque = 1 Nm
| weight = 1 kg
| dimensions = 10cm radius
| gear_ratio = 1:8
| voltage = 48V
| cad_link = https://example.com/model.step
| interface = CAN
| gear_type = Planetary
}}
</syntaxhighlight>
54148cd7a2d02bc6896e4d834348028d76ad2ce7
382
366
2024-04-25T22:22:11Z
Ben
2
wikitext
text/x-wiki
=== Adding an Infobox ===
There are a few infobox templates which we use:
* [[Template:Infobox_company]]
* [[Template:Infobox_robot]]
* [[Template:Infobox_actuator]]
All of the fields are optional, except for <code>name</code>.
=== Company ===
To add a company infobox, use:
<syntaxhighlight lang="text">
{{infobox company
| name = Name
| country = Country
| website_link = https://link.com/
| robots = [[Stompy]]
}}
</syntaxhighlight>
Here is the blank version:
<syntaxhighlight lang="text">
{{infobox company
| name =
| country =
| website_link =
| robots =
}}
</syntaxhighlight>
=== Robot ===
To add a robot infobox, use:
<syntaxhighlight lang="text">
{{infobox robot
| name = Name
| organization = Company
| video_link = https://youtube.com/
| cost = USD 1
| height = 10 ft
| weight = 100 kg
| speed = 1 m/s
| lift_force = 10 lb
| battery_life = 5 hr
| battery_capacity = 1 mWh
| purchase_link = https://buy.com
| number_made = 10
| dof = 10
| status = Available
}}
</syntaxhighlight>
Here is the blank version:
<syntaxhighlight lang="text">
{{infobox robot
| name =
| organization =
| video_link =
| cost =
| height =
| weight =
| speed =
| lift_force =
| battery_life =
| battery_capacity =
| purchase_link =
| number_made =
| dof =
| status =
}}
</syntaxhighlight>
=== Actuator ===
To add an actuator infobox, use:
<syntaxhighlight lang="text">
{{infobox actuator
| name = Name
| manufacturer = Manufacturer
| cost = USD 1
| purchase_link = https://amazon.com/
| nominal_torque = 1 Nm
| peak_torque = 1 Nm
| weight = 1 kg
| dimensions = 10cm radius
| gear_ratio = 1:8
| voltage = 48V
| cad_link = https://example.com/model.step
| interface = CAN
| gear_type = Planetary
}}
</syntaxhighlight>
Here is the blank version:
<syntaxhighlight lang="text">
{{infobox actuator
| name =
| manufacturer =
| cost =
| purchase_link =
| nominal_torque =
| peak_torque =
| weight =
| dimensions =
| gear_ratio =
| voltage =
| cad_link =
| interface =
| gear_type =
}}
</syntaxhighlight>
878f0457124f15091d7342af0104814efcb2850a
383
382
2024-04-25T22:22:28Z
Ben
2
wikitext
text/x-wiki
=== Adding an Infobox ===
There are a few infobox templates which we use:
* [[Template:Infobox_company]]
* [[Template:Infobox_robot]]
* [[Template:Infobox_actuator]]
All of the fields are optional, except for <code>name</code>.
=== Company ===
To add a company infobox, use:
<syntaxhighlight lang="text">
{{infobox company
| name = Name
| country = Country
| website_link = https://link.com/
| robots = [[Stompy]]
}}
</syntaxhighlight>
Here is the blank version:
<syntaxhighlight lang="text">
{{infobox company
| name =
| country =
| website_link =
| robots =
}}
</syntaxhighlight>
=== Robot ===
To add a robot infobox, use:
<syntaxhighlight lang="text">
{{infobox robot
| name = Name
| organization = Company
| video_link = https://youtube.com/
| cost = USD 1
| height = 10 ft
| weight = 100 kg
| speed = 1 m/s
| lift_force = 10 lb
| battery_life = 5 hr
| battery_capacity = 1 mWh
| purchase_link = https://buy.com
| number_made = 10
| dof = 10
| status = Available
}}
</syntaxhighlight>
Here is the blank version:
<syntaxhighlight lang="text">
{{infobox robot
| name =
| organization =
| video_link =
| cost =
| height =
| weight =
| speed =
| lift_force =
| battery_life =
| battery_capacity =
| purchase_link =
| number_made =
| dof =
| status =
}}
</syntaxhighlight>
=== Actuator ===
To add an actuator infobox, use:
<syntaxhighlight lang="text">
{{infobox actuator
| name = Name
| manufacturer = Manufacturer
| cost = USD 1
| purchase_link = https://amazon.com/
| nominal_torque = 1 Nm
| peak_torque = 1 Nm
| weight = 1 kg
| dimensions = 10cm radius
| gear_ratio = 1:8
| voltage = 48V
| cad_link = https://example.com/model.step
| interface = CAN
| gear_type = Planetary
}}
</syntaxhighlight>
Here is the blank version:
<syntaxhighlight lang="text">
{{infobox actuator
| name =
| manufacturer =
| cost =
| purchase_link =
| nominal_torque =
| peak_torque =
| weight =
| dimensions =
| gear_ratio =
| voltage =
| cad_link =
| interface =
| gear_type =
}}
</syntaxhighlight>
26714788e3f6746d7335b11bec9f6a1f454afc55
384
383
2024-04-25T22:22:42Z
Ben
2
wikitext
text/x-wiki
=== Adding an Infobox ===
There are a few infobox templates which we use:
* [[Template:Infobox_company]]
* [[Template:Infobox_robot]]
* [[Template:Infobox_actuator]]
All of the fields are optional, except for <code>name</code>.
=== Company ===
To add a company infobox, use:
<syntaxhighlight lang="text">
{{infobox company
| name = Name
| country = Country
| website_link = https://link.com/
| robots = [[Stompy]]
}}
</syntaxhighlight>
Here is the blank version:
<syntaxhighlight lang="text">
{{infobox company
| name =
| country =
| website_link =
| robots =
}}
</syntaxhighlight>
=== Robot ===
To add a robot infobox, use:
<syntaxhighlight lang="text">
{{infobox robot
| name = Name
| organization = Company
| video_link = https://youtube.com/
| cost = USD 1
| height = 10 ft
| weight = 100 kg
| speed = 1 m/s
| lift_force = 10 lb
| battery_life = 5 hr
| battery_capacity = 1 mWh
| purchase_link = https://buy.com
| number_made = 10
| dof = 10
| status = Available
}}
</syntaxhighlight>
Here is the blank version:
<syntaxhighlight lang="text">
{{infobox robot
| name =
| organization =
| video_link =
| cost =
| height =
| weight =
| speed =
| lift_force =
| battery_life =
| battery_capacity =
| purchase_link =
| number_made =
| dof =
| status =
}}
</syntaxhighlight>
=== Actuator ===
To add an actuator infobox, use:
<syntaxhighlight lang="text">
{{infobox actuator
| name = Name
| manufacturer = Manufacturer
| cost = USD 1
| purchase_link = https://amazon.com/
| nominal_torque = 1 Nm
| peak_torque = 1 Nm
| weight = 1 kg
| dimensions = 10cm radius
| gear_ratio = 1:8
| voltage = 48V
| cad_link = https://example.com/model.step
| interface = CAN
| gear_type = Planetary
}}
</syntaxhighlight>
Here is the blank version:
<syntaxhighlight lang="text">
{{infobox actuator
| name =
| manufacturer =
| cost =
| purchase_link =
| nominal_torque =
| peak_torque =
| weight =
| dimensions =
| gear_ratio =
| voltage =
| cad_link =
| interface =
| gear_type =
}}
</syntaxhighlight>
e52333e859f5768dee12a579ed8d60cae29a7ffb
Template:Infobox actuator
10
86
354
351
2024-04-25T21:25:55Z
Ben
2
wikitext
text/x-wiki
{{infobox
| name = {{{name}}}
| key1 = Name
| value1 = {{{name}}}
| key2 = Manufacturer
| value2 = {{{manufacturer|}}}
| key3 = Cost
| value3 = {{{cost|}}}
| key4 = Purchase
| value4 = {{#if: {{{purchase_link|}}} | [{{{purchase_link}}} Link] }}
| key5 = Nominal Torque
| value5 = {{{nominal_torque|}}}
| key6 = Peak Torque
| value6 = {{{peak_torque|}}}
| key7 = Weight
| value7 = {{{weight|}}}
| key8 = Size
| value8 = {{{size|}}}
| key9 = Gear Ratio
| value9 = {{{gear_ratio|}}}
| key10 = Voltage
| value10 = {{{voltage|}}}
| key11 = CAD
| value11 = {{#if: {{{cad_link|}}} | [{{{cad_link}}} Link] }}
}}
4bb7840ba9f7aa697fa43305d06541956ae925c3
356
354
2024-04-25T21:28:37Z
Ben
2
wikitext
text/x-wiki
{{infobox
| name = {{{name}}}
| key1 = Name
| value1 = {{{name}}}
| key2 = Manufacturer
| value2 = {{{manufacturer|}}}
| key3 = Cost
| value3 = {{{cost|}}}
| key4 = Purchase
| value4 = {{#if: {{{purchase_link|}}} | [{{{purchase_link}}} Link] }}
| key5 = Nominal Torque
| value5 = {{{nominal_torque|}}}
| key6 = Peak Torque
| value6 = {{{peak_torque|}}}
| key7 = Weight
| value7 = {{{weight|}}}
| key8 = Size
| value8 = {{{size|}}}
| key9 = Gear Ratio
| value9 = {{{gear_ratio|}}}
| key10 = Voltage
| value10 = {{{voltage|}}}
| key11 = CAD
| value11 = {{#if: {{{cad_link|}}} | [{{{cad_link}}} Link] }}
| key12 = Interface
| value12 = {{{interface|}}}
}}
b95e325a200386749438caa4fb3ce94bf98e1f3e
359
356
2024-04-25T21:30:09Z
Ben
2
wikitext
text/x-wiki
{{infobox
| name = {{{name}}}
| key1 = Name
| value1 = {{{name}}}
| key2 = Manufacturer
| value2 = {{{manufacturer|}}}
| key3 = Cost
| value3 = {{{cost|}}}
| key4 = Purchase
| value4 = {{#if: {{{purchase_link|}}} | [{{{purchase_link}}} Link] }}
| key5 = Nominal Torque
| value5 = {{{nominal_torque|}}}
| key6 = Peak Torque
| value6 = {{{peak_torque|}}}
| key7 = Weight
| value7 = {{{weight|}}}
| key8 = Size
| value8 = {{{size|}}}
| key9 = Gear Ratio
| value9 = {{{gear_ratio|}}}
| key10 = Voltage
| value10 = {{{voltage|}}}
| key11 = CAD
| value11 = {{#if: {{{cad_link|}}} | [{{{cad_link}}} Link] }}
| key12 = Interface
| value12 = {{{interface|}}}
| key13 = Gear Type
| value13 = {{{gear_type|}}}
}}
5d2339f5b0278b09dbda9c2e16251b22534c778a
365
359
2024-04-25T21:34:06Z
Ben
2
wikitext
text/x-wiki
{{infobox
| name = {{{name}}}
| key1 = Name
| value1 = {{{name}}}
| key2 = Manufacturer
| value2 = {{{manufacturer|}}}
| key3 = Cost
| value3 = {{{cost|}}}
| key4 = Purchase
| value4 = {{#if: {{{purchase_link|}}} | [{{{purchase_link}}} Link] }}
| key5 = Nominal Torque
| value5 = {{{nominal_torque|}}}
| key6 = Peak Torque
| value6 = {{{peak_torque|}}}
| key7 = Weight
| value7 = {{{weight|}}}
| key8 = Dimensions
| value8 = {{{dimensions|}}}
| key9 = Gear Ratio
| value9 = {{{gear_ratio|}}}
| key10 = Voltage
| value10 = {{{voltage|}}}
| key11 = CAD
| value11 = {{#if: {{{cad_link|}}} | [{{{cad_link}}} Link] }}
| key12 = Interface
| value12 = {{{interface|}}}
| key13 = Gear Type
| value13 = {{{gear_type|}}}
}}
8dae25613298696425e69eb00e94f3ea05caee15
Servo Design
0
61
368
347
2024-04-25T21:38:09Z
Ben
2
wikitext
text/x-wiki
This page contains information about how to build a good open-source servo.
=== Open Source Servos ===
* [https://github.com/unhuman-io/obot OBot]
* [https://github.com/atopile/spin-servo-drive SPIN Servo Drive]
=== Commercial Servos ===
* [[MyActuator X-Series]]
** Based on the MIT Cheetah actuator
** Planetary gears with 1:6 to 1:8 reduction ratios
=== Controllers ===
* [https://github.com/napowderly/mcp2515 MCP2515 transceiver circuit]
[[Category: Hardware]]
[[Category: Electronics]]
71575e1ec9d695a12dea2fd9928c9878906de49f
370
368
2024-04-25T21:39:36Z
Ben
2
wikitext
text/x-wiki
This page contains information about how to build a good open-source servo.
=== Open Source Servos ===
* [https://github.com/unhuman-io/obot OBot]
* [https://github.com/atopile/spin-servo-drive SPIN Servo Drive]
=== Commercial Servos ===
* [[MyActuator X-Series]]
=== Controllers ===
* [https://github.com/napowderly/mcp2515 MCP2515 transceiver circuit]
[[Category: Hardware]]
[[Category: Electronics]]
44c255afcbd2dcc9cdbc2869e315db16b7643d74
374
370
2024-04-25T21:44:08Z
Ben
2
wikitext
text/x-wiki
This page contains information about how to build a good open-source servo.
=== Open Source Servos ===
* [https://github.com/unhuman-io/obot OBot]
* [https://github.com/atopile/spin-servo-drive SPIN Servo Drive]
=== Commercial Servos ===
* [[MyActuator X-Series]]
=== Controllers ===
* [https://github.com/napowderly/mcp2515 MCP2515 transceiver circuit]
[[Category: Hardware]]
[[Category: Electronics]]
[[Category: Actuators]]
5a39de92d02d7995c735b97c188d4927aad44abe
393
374
2024-04-26T04:34:01Z
24.143.251.200
0
/* Open Source Servos */
wikitext
text/x-wiki
This page contains information about how to build a good open-source servo.
=== Open Source Servos ===
* [https://github.com/unhuman-io/obot OBot]
* [https://github.com/atopile/spin-servo-drive SPIN Servo Drive]
* [https://github.com/jcchurch13/Mechaduino-Firmware Mechaduino-Firmware]
=== Commercial Servos ===
* [[MyActuator X-Series]]
=== Controllers ===
* [https://github.com/napowderly/mcp2515 MCP2515 transceiver circuit]
[[Category: Hardware]]
[[Category: Electronics]]
[[Category: Actuators]]
d81c3e9d117824b04fa838be38f372354c366684
394
393
2024-04-26T04:34:27Z
24.143.251.200
0
/* Open Source Servos */
wikitext
text/x-wiki
This page contains information about how to build a good open-source servo.
=== Open Source Servos ===
* [https://github.com/unhuman-io/obot OBot]
* [https://github.com/atopile/spin-servo-drive SPIN Servo Drive]
* [https://github.com/jcchurch13/Mechaduino-Firmware Mechaduino-Firmware]
=== Commercial Servos ===
* [[MyActuator X-Series]]
=== Controllers ===
* [https://github.com/napowderly/mcp2515 MCP2515 transceiver circuit]
[[Category: Hardware]]
[[Category: Electronics]]
[[Category: Actuators]]
7e739d24b0350647246e202e93f76439a3bde2df
Main Page
0
1
369
346
2024-04-25T21:39:04Z
Ben
2
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots. As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
Here are some resources to get started learning about humanoid robots.
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
|
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== List of Actuators ===
{| class="wikitable"
|-
! Actuator
! Notes
|-
| [[MyActuator X-Series]]
| MIT Cheetah-like quasi-direct drive actuator, with planetary gears
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[XPENG]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[Proxy]]
|
|}
6de6811f280a8a17a373ccd3052b7191df7f8980
378
369
2024-04-25T22:04:38Z
Ben
2
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
Here are some resources to get started learning about humanoid robots.
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
|
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== List of Actuators ===
{| class="wikitable"
|-
! Actuator
! Notes
|-
| [[MyActuator X-Series]]
| MIT Cheetah-like quasi-direct drive actuator, with planetary gears
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[XPENG]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[Proxy]]
|
|}
359975aa537be3b7cfc245af75cd93ca69af4145
379
378
2024-04-25T22:15:27Z
Ben
2
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
Here are some resources to get started learning about humanoid robots.
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
|
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== List of Actuators ===
{| class="wikitable"
|-
! Actuator
! Notes
|-
| [[MyActuator X-Series]]
| MIT Cheetah-like quasi-direct drive actuator, with planetary gears
|-
| [[OBot]]
| Open-source actuator
|-
| [[SPIN Servo]]
| Open-source actuator
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[XPENG]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[Proxy]]
|
|}
35c8766e4ac49aebf4e133782cdf80e8101b8fa9
Category:Actuators
14
88
376
2024-04-25T21:44:39Z
Ben
2
Created page with "This is a category to use for anything related to actuators."
wikitext
text/x-wiki
This is a category to use for anything related to actuators.
ce3d069b2b9b4cf8c4f515982dde5316e6b62563
Booster Robotics
0
62
377
282
2024-04-25T21:46:30Z
Ben
2
wikitext
text/x-wiki
Booster Robotics' only online presence is its YouTube channel, which features [https://www.youtube.com/watch?v=SIK2MFIIIXw this video].
{{infobox company
| name = Booster Robotics
| website_link = https://www.youtube.com/watch?v=SIK2MFIIIXw
| robots = [[BR002]]
}}
[[Category:Companies]]
7e7c868bcc2414124de6453e3ad5a66a8e64ae2b
OBot
0
89
380
2024-04-25T22:18:08Z
Ben
2
Created page with "The [https://github.com/unhuman-io/obot OBot] is an open-source robot designed for doing mobile manipulation. {{infobox actuator | name = OBot | voltage = 36V | interface = U..."
wikitext
text/x-wiki
The [https://github.com/unhuman-io/obot OBot] is an open-source robot designed for mobile manipulation.
{{infobox actuator
| name = OBot
| voltage = 36V
| interface = USB
}}
5e2a4717f24711679c0402fd2fa47e818ff8a4f8
SPIN Servo
0
90
381
2024-04-25T22:20:07Z
Ben
2
Created page with "The [https://github.com/atopile/spin-servo-drive SPIN Servo] as an open-source actuator developed by [https://atopile.io/ atopile]. {{infobox actuator | name = SPIN Servo | m..."
wikitext
text/x-wiki
The [https://github.com/atopile/spin-servo-drive SPIN Servo] is an open-source actuator developed by [https://atopile.io/ atopile].
{{infobox actuator
| name = SPIN Servo
| manufacturer =
| cost = USD 1
| purchase_link =
| nominal_torque =
| peak_torque =
| weight =
| dimensions =
| gear_ratio =
| voltage =
| cad_link =
| interface = CAN bus
| gear_type =
}}
9013a7fe37a2c07d94cb3ef217e94dd254cf4ea3
385
381
2024-04-25T22:23:31Z
Ben
2
wikitext
text/x-wiki
The [https://github.com/atopile/spin-servo-drive SPIN Servo] is an open-source actuator developed by [https://atopile.io/ atopile].
{{infobox actuator
| name = SPIN Servo
| manufacturer =
| cost = USD 30
| purchase_link =
| nominal_torque =
| peak_torque =
| weight =
| dimensions =
| gear_ratio =
| voltage =
| cad_link =
| interface = CAN bus
| gear_type =
}}
7681fc84784b2183a0019eab5680467a16ca173a
386
385
2024-04-25T22:39:16Z
157.131.153.166
0
wikitext
text/x-wiki
The [https://github.com/atopile/spin-servo-drive SPIN Servo] is an open-source actuator developed by [https://atopile.io/ atopile].
{{infobox actuator
| name = SPIN Servo
| manufacturer = Holry Motor
| cost = USD 30 (BOM)
| purchase_link = https://shop.atopile.io/
| nominal_torque = 0.125 Nm
| peak_torque = 0.375 Nm
| weight =
| dimensions = 42mmx42mmx60mm
| gear_ratio = Direct drive (bolt on options)
| voltage = 12V-24V
| cad_link = https://github.com/atopile/spin-servo-drive/tree/main/mech
| interface = CAN bus, i2c
| gear_type = N/A
}}
f8ccc0edb7c78ad4dc865f69ea734876d12cc2d0
387
386
2024-04-25T22:40:01Z
157.131.153.166
0
wikitext
text/x-wiki
The [https://github.com/atopile/spin-servo-drive SPIN Servo] is an open-source actuator developed by [https://atopile.io/ atopile].
{{infobox actuator
| name = SPIN Servo
| manufacturer = Holry Motor
| cost = USD 30 (BOM)
| purchase_link = https://shop.atopile.io/
| nominal_torque = 0.125 Nm
| peak_torque = 0.375 Nm
| weight =
| dimensions = 42mmx42mmx60mm
| gear_ratio = Direct drive (bolt on options)
| voltage = 12V-24V
| cad_link = https://github.com/atopile/spin-servo-drive/tree/main/mech
| interface = CAN bus, i2c
| gear_type = N/A
}}
b2fc8d79d30bf4c52c8b49752be9378bfcb73eb2
388
387
2024-04-25T22:43:57Z
157.131.153.166
0
wikitext
text/x-wiki
The [https://github.com/atopile/spin-servo-drive SPIN Servo] is an open-source actuator developed by [https://atopile.io/ atopile].
{{infobox actuator
| name = SPIN Servo
| manufacturer = Holry Motor
| cost = USD 30 (BOM)
| purchase_link = https://shop.atopile.io/
| nominal_torque = 0.125 Nm
| peak_torque = 0.375 Nm
| weight = 311.6g
| dimensions = 42mmx42mmx60mm
| gear_ratio = Direct drive (bolt on options)
| voltage = 12V-24V
| cad_link = https://github.com/atopile/spin-servo-drive/tree/main/mech
| interface = CAN bus, i2c
| gear_type = N/A
}}
6f9d1cb38179435da6678448379cb1185fea11b5
390
388
2024-04-25T22:47:23Z
Narayanp
12
wikitext
text/x-wiki
The [https://github.com/atopile/spin-servo-drive SPIN Servo] is an open-source actuator developed by [https://atopile.io/ atopile].
[[File:Spin.jpg|thumb]]
{{infobox actuator
| name = SPIN Servo
| manufacturer = Holry Motor
| cost = USD 30 (BOM)
| purchase_link = https://shop.atopile.io/
| nominal_torque = 0.125 Nm
| peak_torque = 0.375 Nm
| weight = 311.6g
| dimensions = 42mmx42mmx60mm
| gear_ratio = Direct drive (bolt on options)
| voltage = 12V-24V
| cad_link = https://github.com/atopile/spin-servo-drive/tree/main/mech
| interface = CAN bus, i2c
| gear_type = N/A
}}
801ad5ecf42884c4d6aa68565aa766b731c001ce
391
390
2024-04-25T22:59:14Z
24.130.242.94
0
wikitext
text/x-wiki
The [https://github.com/atopile/spin-servo-drive SPIN Servo] is an open-source actuator developed by [https://atopile.io/ atopile].
[[File:Spin.jpg|thumb]]
{{infobox actuator
| name = SPIN Servo
| manufacturer = Holry Motor
| cost = USD 30 (BOM)
| purchase_link = https://shop.atopile.io/
| nominal_torque = 0.125 Nm
| peak_torque = 0.375 Nm
| weight = 311.6g
| dimensions = 42mmx42mmx60mm
| gear_ratio = Direct drive (bolt on options)
| voltage = 12V-24V
| cad_link = https://github.com/atopile/spin-servo-drive/tree/main/mech
| interface = CAN bus, i2c
}}
69d4de3f9392354d09974f3447e7b1b76e389ab1
File:Spin.jpg
6
91
389
2024-04-25T22:47:16Z
Narayanp
12
wikitext
text/x-wiki
photo of the spin motor and driver board
7e65d2a207ed16ee253b0d74c01af55e054dbe9a
K-Scale Cluster
0
16
392
54
2024-04-25T23:14:12Z
Ben
2
wikitext
text/x-wiki
The K-Scale Labs cluster is a shared cluster for robotics research. This page contains notes on how to access the cluster.
=== Onboarding ===
To get onboarded, you should send us the public key that you want to use and, optionally, your preferred username.
After being onboarded, you should receive the following information:
* Your user ID (for this example, we'll use <code>stompy</code>)
* The jumphost ID (for this example, we'll use <code>127.0.0.1</code>)
* The cluster ID (for this example, we'll use <code>127.0.0.2</code>)
To connect, you should be able to use the following command:
<syntaxhighlight lang="bash">
ssh -o ProxyCommand="ssh -i ~/.ssh/id_rsa -W %h:%p stompy@127.0.0.1" stompy@127.0.0.2 -i ~/.ssh/id_rsa
</syntaxhighlight>
Note that <code>~/.ssh/id_rsa</code> should point to your private key file.
Alternatively, you can add the following to your SSH config file, which should allow you to connect directly:
<syntaxhighlight lang="text">
Host jumphost
User stompy
Hostname 127.0.0.1
IdentityFile ~/.ssh/id_rsa
Host cluster
User stompy
Hostname 127.0.0.2
ProxyJump jumphost
IdentityFile ~/.ssh/id_rsa
</syntaxhighlight>
Please inform us if you have any issues.
=== Notes ===
* You may be sharing your part of the cluster with other users. If so, it is a good idea to avoid using all the GPUs. If you're training models in PyTorch, you can do this by setting the <code>CUDA_VISIBLE_DEVICES</code> environment variable, as shown in the sketch after this list.
* You should avoid storing data files and model checkpoints to your root directory. Instead, use the <code>/ephemeral</code> directory. Your home directory should come with a symlink to a subdirectory which you have write access to.
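Here is a minimal sketch of the GPU-limiting suggestion above, assuming a PyTorch script; <code>CUDA_VISIBLE_DEVICES</code> must be set before CUDA is initialized, so set it before importing <code>torch</code> (or export it in the shell when launching the job).
<syntaxhighlight lang="python">
# Restrict a PyTorch job to GPUs 0 and 1 on a shared node.
# CUDA_VISIBLE_DEVICES must be set before CUDA is initialized,
# so set it before importing torch (or export it in the shell instead).
import os

os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"

import torch

print("visible GPUs:", torch.cuda.device_count())  # should report 2
</syntaxhighlight>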
6dd51d68e97e0b5ab26b213c3f5013e13d23639c
K1
0
92
395
2024-04-26T09:28:04Z
User2024
6
Created page with "K1 is a humanoid robot from [[Kepler]]. {{infobox robot | name = K1 | organization = [[Kepler]] | height = 178 cm | weight = 85 kg | video_link = https://www.youtube.com/wa..."
wikitext
text/x-wiki
K1 is a humanoid robot from [[Kepler]].
{{infobox robot
| name = K1
| organization = [[Kepler]]
| height = 178 cm
| weight = 85 kg
| video_link = https://www.youtube.com/watch?v=A5vshTgDbKE
| cost =
}}
[[Category:Robots]]
bfc0cbb9186649495e5f82e83ec0db55203ff404
Kepler
0
93
396
2024-04-26T09:28:29Z
User2024
6
Created page with "Kepler is building a humanoid robot called [[K1]]. {{infobox company | name = Kepler | country = China | website_link = https://www.gotokepler.com/ | robots = [[K1]] }} Ca..."
wikitext
text/x-wiki
Kepler is building a humanoid robot called [[K1]].
{{infobox company
| name = Kepler
| country = China
| website_link = https://www.gotokepler.com/
| robots = [[K1]]
}}
[[Category:Companies]]
7827d7a0f8cc8024557ff34624c3bbdd8b007b5d
Kaleido
0
94
397
2024-04-26T09:32:04Z
User2024
6
Created page with "Kaleido is a humanoid robot from [[Kawasaki Robotics]]. {{infobox robot | name = Kaleido | organization = [[Kawasaki Robotics]] | height = 179 cm | weight = 83 kg | two_han..."
wikitext
text/x-wiki
Kaleido is a humanoid robot from [[Kawasaki Robotics]].
{{infobox robot
| name = Kaleido
| organization = [[Kawasaki Robotics]]
| height = 179 cm
| weight = 83 kg
| two_hand_payload = 60
| video_link = https://www.youtube.com/watch?v=_h66xSbIEdU
| cost =
}}
[[Category:Robots]]
d109d73f886500e4dd0ce2a1b18306c4a00d6085
Kawasaki Robotics
0
95
398
2024-04-26T09:32:23Z
User2024
6
Created page with "Kawasaki Robotics is building a humanoid robot called [[Kaleido]]. {{infobox company | name = Kawasaki Robotics | country = Japan | website_link = https://kawasakirobotics.co..."
wikitext
text/x-wiki
Kawasaki Robotics is building a humanoid robot called [[Kaleido]].
{{infobox company
| name = Kawasaki Robotics
| country = Japan
| website_link = https://kawasakirobotics.com/
| robots = [[Kaleido]]
}}
[[Category:Companies]]
4c3203aee0a2d49bd2447b02185e2e260905f5ce
400
398
2024-04-26T09:34:19Z
User2024
6
wikitext
text/x-wiki
Kawasaki Robotics is building humanoid robots called [[Kaleido]] and [[Friends]].
{{infobox company
| name = Kawasaki Robotics
| country = Japan
| website_link = https://kawasakirobotics.com/
| robots = [[Kaleido]], [[Friends]]
}}
[[Category:Companies]]
8b6e10adc633599736ec4b62e4a8a3a788d5dfe7
Friends
0
96
399
2024-04-26T09:33:39Z
User2024
6
Created page with "Friends is a humanoid robot from [[Kawasaki Robotics]]. {{infobox robot | name = Friends | organization = [[Kawasaki Robotics]] | height = 168 cm | weight = 54 kg | two_han..."
wikitext
text/x-wiki
Friends is a humanoid robot from [[Kawasaki Robotics]].
{{infobox robot
| name = Friends
| organization = [[Kawasaki Robotics]]
| height = 168 cm
| weight = 54 kg
| two_hand_payload = 10
| video_link = https://www.youtube.com/watch?v=dz4YLbgbVvc
| cost =
}}
[[Category:Robots]]
28f51a1d1f16363804e8d537abc3641d19f4af33
Kangaroo
0
97
401
2024-04-26T09:37:03Z
User2024
6
Created page with "Kangaroo is a humanoid robot from [[PAL Robotics]]. {{infobox robot | name = Kangaroo | organization = [[PAL Robotics]] | height = 145 cm | weight = 40 kg | two_hand_payloa..."
wikitext
text/x-wiki
Kangaroo is a humanoid robot from [[PAL Robotics]].
{{infobox robot
| name = Kangaroo
| organization = [[PAL Robotics]]
| height = 145 cm
| weight = 40 kg
| two_hand_payload =
| video_link = https://www.youtube.com/watch?v=TU9q6j8KJGU&t=63s
| cost =
}}
[[Category:Robots]]
2a8f82243e0152042b311092c8c18d5bf36fab6b
PAL Robotics
0
98
402
2024-04-26T09:37:29Z
User2024
6
Created page with "PAL Robotics is building a humanoid robot called [[Kangaroo]]. {{infobox company | name = PAL Robotics | country = Spain | website_link = https://pal-robotics.com/ | robots =..."
wikitext
text/x-wiki
PAL Robotics is building a humanoid robot called [[Kangaroo]].
{{infobox company
| name = PAL Robotics
| country = Spain
| website_link = https://pal-robotics.com/
| robots = [[Kangaroo]]
}}
[[Category:Companies]]
a9ffd1d474c012ca8be8a126a748657828473e8c
404
402
2024-04-26T09:40:23Z
User2024
6
wikitext
text/x-wiki
PAL Robotics is building humanoid robots called [[Kangaroo]] and [[REEM-C]].
{{infobox company
| name = PAL Robotics
| country = Spain
| website_link = https://pal-robotics.com/
| robots = [[Kangaroo]], [[REEM-C]]
}}
[[Category:Companies]]
f3ca92907f0823f29877e12eb304147fd3c4836c
406
404
2024-04-26T09:41:59Z
User2024
6
wikitext
text/x-wiki
PAL Robotics is building humanoid robots called [[Kangaroo]], [[REEM-C]] and [[TALOS]].
{{infobox company
| name = PAL Robotics
| country = Spain
| website_link = https://pal-robotics.com/
| robots = [[Kangaroo]], [[REEM-C]], [[TALOS]]
}}
[[Category:Companies]]
25b428f4604c694759e788051599dea01385d853
REEM-C
0
99
403
2024-04-26T09:39:14Z
User2024
6
Created page with "REEM-C is a humanoid robot from [[PAL Robotics]]. {{infobox robot | name = REEM-C | organization = [[PAL Robotics]] | height = 165 cm | weight = 80 kg | single_hand_payload..."
wikitext
text/x-wiki
REEM-C is a humanoid robot from [[PAL Robotics]].
{{infobox robot
| name = REEM-C
| organization = [[PAL Robotics]]
| height = 165 cm
| weight = 80 kg
| single_hand_payload = 1
| two_hand_payload =
| video_link = https://www.youtube.com/watch?v=lqxTov7isio
| cost =
}}
[[Category:Robots]]
d931cc751c79a7d21da55638e744ff3106fa4be5
TALOS
0
100
405
2024-04-26T09:41:33Z
User2024
6
Created page with "TALOS is a humanoid robot from [[PAL Robotics]]. {{infobox robot | name = TALOS | organization = [[PAL Robotics]] | height = 175 cm | weight = 95 kg | single_hand_payload =..."
wikitext
text/x-wiki
TALOS is a humanoid robot from [[PAL Robotics]].
{{infobox robot
| name = TALOS
| organization = [[PAL Robotics]]
| height = 175 cm
| weight = 95 kg
| single_hand_payload = 6
| two_hand_payload =
| video_link = https://www.youtube.com/watch?v=xUeApfMAKAE
| cost =
}}
[[Category:Robots]]
55891c8df21918b1ece73ea95c301a21e695a70e
Kuavo
0
101
407
2024-04-26T09:46:14Z
User2024
6
Created page with "Kuavo is a humanoid robot from [[LEJUROBOT ]]. {{infobox robot | name = Kuavo | organization = [[LEJUROBOT]] | height = | weight = 45 kg | single_hand_payload | two_hand_pay..."
wikitext
text/x-wiki
Kuavo is a humanoid robot from [[LEJUROBOT]].
{{infobox robot
| name = Kuavo
| organization = [[LEJUROBOT]]
| height =
| weight = 45 kg
| single_hand_payload =
| two_hand_payload =
| video_link = https://www.youtube.com/watch?v=Rx1h59y01GY
| cost =
}}
[[Category:Robots]]
54fea96758c9519d1c8ddccc0f3decb3d72feff9
LEJUROBOT
0
102
408
2024-04-26T09:46:33Z
User2024
6
Created page with "LEJUROBOT is building a humanoid robot called [[Kuavo]]. {{infobox company | name = LEJUROBOT | country = China | website_link = https://www.lejurobot.com/ | robots = Kuav..."
wikitext
text/x-wiki
LEJUROBOT is building a humanoid robot called [[Kuavo]].
{{infobox company
| name = LEJUROBOT
| country = China
| website_link = https://www.lejurobot.com/
| robots = [[Kuavo]]
}}
[[Category:Companies]]
fd19716d4f09956612746a5fb6677549bc08ef95
MagicBot
0
103
409
2024-04-26T09:49:06Z
User2024
6
Created page with "MagicBot is a humanoid robot from [[MagicLab, DREAME]]. {{infobox robot | name = MagicBot | organization = [[MagicLab, DREAME]] | height = 178 cm | weight = 56 kg | single_h..."
wikitext
text/x-wiki
MagicBot is a humanoid robot from [[MagicLab, DREAME]].
{{infobox robot
| name = MagicBot
| organization = [[MagicLab, DREAME]]
| height = 178 cm
| weight = 56 kg
| single_hand_payload =
| two_hand_payload =
| video_link = https://www.youtube.com/watch?v=NTPmiDrHv4E&t=2s
| cost =
}}
[[Category:Robots]]
e881f8e6b334297147f19313b165d13cbe39a98c
MagicLab, DREAME
0
104
410
2024-04-26T09:49:33Z
User2024
6
Created page with "MagicLab, DREAME is building a humanoid robot called [[MagicBot]]. {{infobox company | name = MagicLab, DREAME | country = China | website_link = | robots = [[MagicBot]] }}..."
wikitext
text/x-wiki
MagicLab, DREAME is building a humanoid robot called [[MagicBot]].
{{infobox company
| name = MagicLab, DREAME
| country = China
| website_link =
| robots = [[MagicBot]]
}}
[[Category:Companies]]
44471abf7a6d74abcbd2211e7d3b9ffeb462605d
MenteeBot (Robot)
0
105
411
2024-04-26T09:52:05Z
User2024
6
Created page with "MenteeBot (Robot) is a humanoid robot from [[MenteeBot]]. {{infobox robot | name = MenteeBot (Robot) | organization = [[MenteeBot]] | height = 175 cm | weight = 70 kg | sing..."
wikitext
text/x-wiki
MenteeBot (Robot) is a humanoid robot from [[MenteeBot]].
{{infobox robot
| name = MenteeBot (Robot)
| organization = [[MenteeBot]]
| height = 175 cm
| weight = 70 kg
| single_hand_payload =
| two_hand_payload = 25
| video_link = https://www.youtube.com/watch?v=zJTf4JhGSsI
| cost =
}}
[[Category:Robots]]
703b47fdda3299a6dfb075609150667414b2fb9e
MenteeBot
0
106
412
2024-04-26T09:52:18Z
User2024
6
Created page with "MenteeBot is building a humanoid robot called [[MenteeBot (Robot)]]. {{infobox company | name = MenteeBot | country = Israel | website_link = https://www.menteebot.com/ | ro..."
wikitext
text/x-wiki
MenteeBot is building a humanoid robot called [[MenteeBot (Robot)]].
{{infobox company
| name = MenteeBot
| country = Israel
| website_link = https://www.menteebot.com/
| robots = [[MenteeBot (Robot)]]
}}
[[Category:Companies]]
d6961a3aab5a6bf59fcdcd5d0ac420d885dfcc71
Mona
0
107
413
2024-04-26T09:59:46Z
User2024
6
Created page with "Mona is a humanoid robot from [[Kind Humanoid]]. {{infobox robot | name = Mona | organization = [[Kind Humanoid]] | height = | weight = | single_hand_payload | two_hand_pay..."
wikitext
text/x-wiki
Mona is a humanoid robot from [[Kind Humanoid]].
{{infobox robot
| name = Mona
| organization = [[Kind Humanoid]]
| height =
| weight =
| single_hand_payload =
| two_hand_payload =
| video_link = https://www.youtube.com/watch?v=LsKOpooGK4Q
| cost =
}}
[[Category:Robots]]
60d89c2e2198842d1a281125d4b4534881c794a0
Kind Humanoid
0
108
414
2024-04-26T10:00:24Z
User2024
6
Created page with "Kind Humanoid is building a humanoid robot called [[Mona]]. {{infobox company | name = Kind Humanoid | country = USA | website_link = https://www.kindhumanoid.com/ | robots =..."
wikitext
text/x-wiki
Kind Humanoid is building a humanoid robot called [[Mona]].
{{infobox company
| name = Kind Humanoid
| country = USA
| website_link = https://www.kindhumanoid.com/
| robots = [[Mona]]
}}
[[Category:Companies]]
5502863c1226cb565757715388a3c725fb6e2169
Nadia
0
109
415
2024-04-26T10:02:49Z
User2024
6
Created page with "Nadia is a humanoid robot from [[IHMC, Boardwalk Robotics]]. {{infobox robot | name = Nadia | organization = [[IHMC, Boardwalk Robotics]] | height = | weight = | single_han..."
wikitext
text/x-wiki
Nadia is a humanoid robot from [[IHMC, Boardwalk Robotics]].
{{infobox robot
| name = Nadia
| organization = [[IHMC, Boardwalk Robotics]]
| height =
| weight =
| single_hand_payload =
| two_hand_payload =
| video_link = https://www.youtube.com/watch?v=uTmUfOc7r_s
| cost =
}}
[[Category:Robots]]
1a0b3687b268c0ab1789334eac37b6a6736c96a0
IHMC, Boardwalk Robotics
0
110
416
2024-04-26T10:02:59Z
User2024
6
Created page with "IHMC, Boardwalk Robotics is building a humanoid robot called [[Nadia]]. {{infobox company | name = IHMC, Boardwalk Robotics | country = USA | website_link = https://www.ihmc...."
wikitext
text/x-wiki
IHMC, Boardwalk Robotics is building a humanoid robot called [[Nadia]].
{{infobox company
| name = IHMC, Boardwalk Robotics
| country = USA
| website_link = https://www.ihmc.us/
| robots = [[Nadia]]
}}
[[Category:Companies]]
507bc2542ec48cd56549666fa8dfd635ff17cfc0
PX5
0
111
417
2024-04-26T10:19:25Z
User2024
6
Created page with "PX5 is a humanoid robot from [[Xpeng]]. {{infobox robot | name = PX5 | organization = [[Xpeng]] | height = | weight = | single_hand_payload | two_hand_payload = | video_li..."
wikitext
text/x-wiki
PX5 is a humanoid robot from [[Xpeng]].
{{infobox robot
| name = PX5
| organization = [[Xpeng]]
| height =
| weight =
| single_hand_payload =
| two_hand_payload =
| video_link = https://www.youtube.com/watch?v=BNSZ8Fwcd20
| cost =
}}
[[Category:Robots]]
eb6b4deda29a3f5cdc249dfadfac6f59d9b537ed
Xpeng
0
112
418
2024-04-26T10:19:38Z
User2024
6
Created page with "Xpeng is building a humanoid robot called [[PX5]]. {{infobox company | name = Xpeng | country = China | website_link = https://www.xpeng.com/ | robots = [[PX5]] }} Categor..."
wikitext
text/x-wiki
Xpeng is building a humanoid robot called [[PX5]].
{{infobox company
| name = Xpeng
| country = China
| website_link = https://www.xpeng.com/
| robots = [[PX5]]
}}
[[Category:Companies]]
d8b2631a560f611b427ba61fa4f6314ae950e5d3
Main Page
0
1
419
379
2024-04-26T13:02:59Z
45.27.55.241
0
/* List of Humanoid Robots */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
Here are some resources to get started learning about humanoid robots.
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
|
|-
| [[Learning algorithms]]
| Resources related with training humanoid models in simulation and real environments
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== List of Actuators ===
{| class="wikitable"
|-
! Actuator
! Notes
|-
| [[MyActuator X-Series]]
| MIT Cheetah-like quasi-direct drive actuator, with planetary gears
|-
| [[OBot]]
| Open-source actuator
|-
| [[SPIN Servo]]
| Open-source actuator
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[Xpeng|XPENG]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[Proxy]]
|
|}
5b6eb7505c8b3e1d23ff8c471421a0da64d60f2f
SuperDroid Robots
0
113
420
2024-04-26T13:07:06Z
45.27.55.241
0
SuperDroid Robots - Building Robots Before It Was Cool
wikitext
text/x-wiki
SuperDroid Robots is not your typical robot startup: we've been building robots for more than 20 years. Over that time, we've built thousands of custom robots, fully assembled robots, and robot kits for nearly every industry, and we have clients spanning the globe in over 110 countries.
Our mission is to make advanced mobile robots accessible.
Our humanoid product is called [[Rocky]].
{{infobox company
| name = SuperDroid Robots
| country = USA
| website_link = https://superdroidrobots.com/
| robots = [[Rocky]]
}}
[[Category:Companies]]
0265136212a3eebe0daec003a3f297589628296c
Rocky
0
114
421
2024-04-26T13:14:41Z
45.27.55.241
0
SuperDroid Robots - Rocky
wikitext
text/x-wiki
Rocky is [[SuperDroid Robots]]' most versatile robot platform, which can move on any terrain, climb stairs, and step over obstacles. We are planning to open-source the humanoid once it is publicly walking and interacting with physical objects.
{{infobox robot
| name = Rocky
| organization = [[SuperDroid Robots]]
| height = 64 in
| width= 24 in
| length = 12 in
| weight = 120 pounds
| locomotion control = Deep Reinforcement Learning
| software framework = ROS2
| V1 video_link = https://www.youtube.com/watch?v=MvAS4AsMvCI
| V2 video_link =https://twitter.com/stevenuecke/status/1707899032973033690
}}
[[Category:Robots]]
d1a293a5c21242e24ed0b45a5bc81d5a4c5e0add
422
421
2024-04-26T13:15:37Z
45.27.55.241
0
wikitext
text/x-wiki
Rocky is [[SuperDroid Robots]]' most versatile robot platform, which can move on any terrain, climb stairs, and step over obstacles. We are planning to open-source the humanoid once it is publicly walking and interacting with physical objects.
{{infobox robot
| name = Rocky
| organization = [[SuperDroid Robots]]
| height = 64 in
| width= 24 in
| length = 12 in
| weight = 120 pounds
| locomotion control = Deep Reinforcement Learning
| software framework = ROS2
| video_link = https://www.youtube.com/watch?v=MvAS4AsMvCI
| video_link =https://twitter.com/stevenuecke/status/1707899032973033690
}}
[[Category:Robots]]
e64c4426305eafc321e9638248fb733fa18edbf6
423
422
2024-04-26T13:20:56Z
45.27.55.241
0
wikitext
text/x-wiki
Rocky is [[SuperDroid Robots]]' most versatile robot platform, which can move on any terrain, climb stairs, and step over obstacles. We are planning to open-source the humanoid once it is publicly walking and interacting with physical objects.
{{infobox robot
| name = Rocky
| organization = [[SuperDroid Robots]]
| video_link = https://www.youtube.com/watch?v=MvAS4AsMvCI , https://twitter.com/stevenuecke/status/1707899032973033690
| cost = $75,000
| height = 64 in
| weight = 120 pounds
| speed =
| lift_force = 150 lbs
| battery_life = 8 hours
| battery_capacity = 3,600 Wh
| purchase_link = https://www.superdroidrobots.com/humanoid-biped-robot/
| number_made = 1
| dof = 22
| status = Finishing sim-2-real for motors
}}
[[Category:Robots]]
9601cbc5bbdd23f2eb13887b081a40d1c17bdd7c
424
423
2024-04-26T13:21:39Z
45.27.55.241
0
wikitext
text/x-wiki
Rocky is [[SuperDroid Robots]]' most versatile robot platform, which can move on any terrain, climb stairs, and step over obstacles. We are planning to open-source the humanoid once it is publicly walking and interacting with physical objects.
{{infobox robot
| name = Rocky
| organization = [[SuperDroid Robots]]
| video_link = V1 https://www.youtube.com/watch?v=MvAS4AsMvCI V2 https://twitter.com/stevenuecke/status/1707899032973033690
| cost = $75,000
| height = 64 in
| weight = 120 pounds
| speed =
| lift_force = 150 lbs
| battery_life = 8 hours
| battery_capacity = 3,600 Wh
| purchase_link = https://www.superdroidrobots.com/humanoid-biped-robot/
| number_made = 1
| dof = 22
| status = Finishing sim-2-real for motors
}}
[[Category:Robots]]
bd104b29463b37fc3e7fde7e2f780ac2da95fce1
Category:Teleop
14
60
425
300
2024-04-26T18:37:28Z
136.62.52.52
0
wikitext
text/x-wiki
= Teleop =
Teleoperation is the art of controlling a robot from a distance (prefix "tele-" comes from the Ancient Greek word tēle, which means "far off, at a distance, far away, far from").
A robot specifically designed to be teleoperated by a human is known as a Proxy.
== Whole Body Teleop ==
In whole-body teleop, the user directly controls a hardware replica of the robot; the real robot mimics the pose of the replica.
* https://mobile-aloha.github.io/
* https://www.youtube.com/watch?v=PFw5hwNVhbA
* https://x.com/haoshu_fang/status/1707434624413306955
* https://x.com/aditya_oberai/status/1762637503495033171
* https://what-is-proxy.com
== VR Teleop ==
In VR teleop, the user controls a simulated version of the robot. The robot and VR headset are usually on the same LAN, and communication is done via gRPC, WebRTC, or ZMQ.
* https://github.com/Improbable-AI/VisionProTeleop
* https://github.com/fazildgr8/VR_communication_mujoco200
* https://github.com/pollen-robotics/reachy2021-unity-package
* https://freetale.medium.com/unity-grpc-in-2023-98b739cb115
* https://x.com/AndreTI/status/1780665435999924343
* https://holo-dex.github.io/
* https://what-is-proxy.com
* https://github.com/ToruOwO/hato/tree/main?tab=readme-ov-file#collecting-demonstration-data
== Controller Teleop ==
In controller teleop, the user uses joysticks, keyboards, and other controllers to control the robot. The buttons map to hardcoded behaviors on the robot.
* https://github.com/ros-teleop
* https://x.com/ShivinDass/status/1606156894271197184
* https://what-is-proxy.com
== Latency ==
* https://ennerf.github.io/2016/09/20/A-Practical-Look-at-Latency-in-Robotics-The-Importance-of-Metrics-and-Operating-Systems.html
* https://link.springer.com/article/10.1007/s10846-022-01749-3
* https://twitter.com/watneyrobotics/status/1769058250731999591
[[Category: Teleop]]
6ee0ac77e73513902bbaa94f5f1cc5743b6c0259
426
425
2024-04-26T18:44:39Z
136.62.52.52
0
wikitext
text/x-wiki
= Teleop =
Teleoperation is the art of controlling a robot from a distance (prefix "tele-" comes from the Ancient Greek word tēle, which means "far off, at a distance, far away, far from").
A robot specifically designed to be teleoperated by a human is known as a Proxy.
== Whole Body Teleop ==
In whole-body teleop, the user directly controls a hardware replica of the robot; the real robot mimics the pose of the replica.
* https://mobile-aloha.github.io/
* https://www.youtube.com/watch?v=PFw5hwNVhbA
* https://x.com/haoshu_fang/status/1707434624413306955
* https://x.com/aditya_oberai/status/1762637503495033171
* https://what-is-proxy.com
* https://github.com/wuphilipp/gello_software
== VR Teleop ==
In VR teleop, the user controls a simulated version of the robot. The robot and VR headset are usually on the same LAN, and communication is done via gRPC, WebRTC, or ZMQ.
* https://github.com/Improbable-AI/VisionProTeleop
* https://github.com/fazildgr8/VR_communication_mujoco200
* https://github.com/pollen-robotics/reachy2021-unity-package
* https://freetale.medium.com/unity-grpc-in-2023-98b739cb115
* https://x.com/AndreTI/status/1780665435999924343
* https://holo-dex.github.io/
* https://what-is-proxy.com
* https://github.com/ToruOwO/hato/tree/main?tab=readme-ov-file#collecting-demonstration-data
* https://github.com/rail-berkeley/oculus_reader
== Controller Teleop ==
In controller teleop, the user uses joysticks, keyboards, and other controllers to control the robot. The buttons map to hardcoded behaviors on the robot.
* https://github.com/ros-teleop
* https://x.com/ShivinDass/status/1606156894271197184
* https://what-is-proxy.com
== Latency ==
* https://ennerf.github.io/2016/09/20/A-Practical-Look-at-Latency-in-Robotics-The-Importance-of-Metrics-and-Operating-Systems.html
* https://link.springer.com/article/10.1007/s10846-022-01749-3
* https://twitter.com/watneyrobotics/status/1769058250731999591
[[Category: Teleop]]
cadfac703cbd83825001628e0799792594708550
427
426
2024-04-26T20:43:30Z
136.62.52.52
0
/* VR Teleop */
wikitext
text/x-wiki
= Teleop =
Teleoperation is the art of controlling a robot from a distance (prefix "tele-" comes from the Ancient Greek word tēle, which means "far off, at a distance, far away, far from").
A robot specifically designed to be teleoperated by a human is known as a Proxy.
== Whole Body Teleop ==
In whole-body teleop, the user directly controls a hardware replica of the robot; the real robot mimics the pose of the replica (a minimal sketch of the mirroring loop follows the list below).
* https://mobile-aloha.github.io/
* https://www.youtube.com/watch?v=PFw5hwNVhbA
* https://x.com/haoshu_fang/status/1707434624413306955
* https://x.com/aditya_oberai/status/1762637503495033171
* https://what-is-proxy.com
* https://github.com/wuphilipp/gello_software
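A minimal sketch of that mirroring loop is shown below. Both helper functions are hypothetical stand-ins for whatever replica and robot interfaces your hardware provides; no specific SDK from the projects listed above is implied.
<syntaxhighlight lang="python">
import time

def read_replica_joint_angles():
    # Hypothetical: read encoder values from the hardware replica.
    return [0.0] * 7

def send_robot_joint_targets(q):
    # Hypothetical: send position targets to the real robot's controller.
    print("commanding joints:", q)

CONTROL_HZ = 100  # teleop loops typically run at tens to hundreds of Hz

for _ in range(5 * CONTROL_HZ):  # run for roughly five seconds in this sketch
    q = read_replica_joint_angles()
    send_robot_joint_targets(q)  # the real robot mirrors the replica's pose
    time.sleep(1.0 / CONTROL_HZ)
</syntaxhighlight>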
== VR Teleop ==
In VR teleop, the user controls a simulated version of the robot. The robot and VR headset are usually on the same LAN, and communication is done via gRPC, WebRTC, or ZMQ (see the sketch after the list below).
* https://github.com/Improbable-AI/VisionProTeleop
* https://github.com/fazildgr8/VR_communication_mujoco200
* https://github.com/pollen-robotics/reachy2021-unity-package
* https://freetale.medium.com/unity-grpc-in-2023-98b739cb115
* https://x.com/AndreTI/status/1780665435999924343
* https://holo-dex.github.io/
* https://what-is-proxy.com
* https://github.com/ToruOwO/hato/tree/main?tab=readme-ov-file#collecting-demonstration-data
* https://github.com/rail-berkeley/oculus_reader
* https://github.com/OpenTeleVision/TeleVision/tree/main
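As a rough sketch of that transport, the pyzmq example below publishes a single pose message and receives it over a local socket. The port, address, and message schema are assumptions for illustration; a real stack streams poses continuously from the headset process to a separate robot process. gRPC and WebRTC fill the same transport role with different trade-offs, which is why all three appear in the projects above.
<syntaxhighlight lang="python">
import json
import time

import zmq

context = zmq.Context()

# Headset side: publish hand poses over the shared LAN.
publisher = context.socket(zmq.PUB)
publisher.bind("tcp://*:5555")

# Robot side: subscribe to every pose message (normally a separate process).
subscriber = context.socket(zmq.SUB)
subscriber.connect("tcp://127.0.0.1:5555")
subscriber.setsockopt_string(zmq.SUBSCRIBE, "")

# PUB/SUB drops messages sent before the subscription is established,
# so give the connection a moment to settle.
time.sleep(0.5)

# Hypothetical pose message; the real schema depends on your teleop stack.
pose = {"position": [0.3, 0.0, 1.1], "orientation_wxyz": [1.0, 0.0, 0.0, 0.0]}
publisher.send_string(json.dumps(pose))

print(json.loads(subscriber.recv_string()))
</syntaxhighlight>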
== Controller Teleop ==
In controller teleop, the user uses joysticks, keyboards, and other controllers to control the robot. The buttons map to hardcoded behaviors on the robot.
* https://github.com/ros-teleop
* https://x.com/ShivinDass/status/1606156894271197184
* https://what-is-proxy.com
== Latency ==
* https://ennerf.github.io/2016/09/20/A-Practical-Look-at-Latency-in-Robotics-The-Importance-of-Metrics-and-Operating-Systems.html
* https://link.springer.com/article/10.1007/s10846-022-01749-3
* https://twitter.com/watneyrobotics/status/1769058250731999591
[[Category: Teleop]]
79ef74d0e34e2ba9038330d23b8f27b91e43607d
THEMIS
0
115
428
2024-04-27T08:23:36Z
User2024
6
Created page with "THEMIS is a humanoid robot from [[Westwood Robotics]]. {{infobox robot | name = THEMIS | organization = [[Westwood Robotics]] | height = 142.2 cm | weight = 39 kg | video_lin..."
wikitext
text/x-wiki
THEMIS is a humanoid robot from [[Westwood Robotics]].
{{infobox robot
| name = THEMIS
| organization = [[Westwood Robotics]]
| height = 142.2 cm
| weight = 39 kg
| video_link = https://www.youtube.com/watch?v=yt4mHwAl9cc
| cost =
}}
[[Category:Robots]]
6814e6dc170d59ae71cc28f086056ef1a8e19305
Westwood Robotics
0
116
429
2024-04-27T08:23:59Z
User2024
6
Created page with "Westwood Robotics is building a humanoid robot called [[THEMIS]]. {{infobox company | name = Westwood Robotics | country = USA | website_link = https://www.westwoodrobotics.i..."
wikitext
text/x-wiki
Westwood Robotics is building a humanoid robot called [[THEMIS]].
{{infobox company
| name = Westwood Robotics
| country = USA
| website_link = https://www.westwoodrobotics.io/
| robots = [[THEMIS]]
}}
[[Category:Companies]]
39ccc919272594359f47e209bf225c5392b49d6b
Valkyrie
0
117
430
2024-04-27T08:26:51Z
User2024
6
Created page with "Valkyrie is a humanoid robot from [[NASA]]. {{infobox robot | name = Valkyrie | organization = [[NASA]] | height = 190 cm | weight = 125 kg | video_link = https://www.youtube..."
wikitext
text/x-wiki
Valkyrie is a humanoid robot from [[NASA]].
{{infobox robot
| name = Valkyrie
| organization = [[NASA]]
| height = 190 cm
| weight = 125 kg
| video_link = https://www.youtube.com/watch?v=LaYlQYHXJio
| cost =
}}
[[Category:Robots]]
7e98af5461e6cd0fbf780e586c061af59c2fc5f6
NASA
0
118
431
2024-04-27T08:27:06Z
User2024
6
Created page with "NASA is building a humanoid robot called [[Valkyrie]]. {{infobox company | name = NASA | country = USA | website_link = https://www.nasa.gov/ | robots = [[Valkyrie]] }} Ca..."
wikitext
text/x-wiki
NASA is building a humanoid robot called [[Valkyrie]].
{{infobox company
| name = NASA
| country = USA
| website_link = https://www.nasa.gov/
| robots = [[Valkyrie]]
}}
[[Category:Companies]]
d99ef6c93ca6d34f7365fea5ee900f9a2d1fe6ae
T1
0
119
432
2024-04-27T08:30:12Z
User2024
6
Created page with "T1 is a humanoid robot from [[FDROBOT]]. {{infobox robot | name = T1 | organization = [[FDROBOT]] | height = 160 cm | weight = 43 kg | single_hand_payload = 145 | video_link..."
wikitext
text/x-wiki
T1 is a humanoid robot from [[FDROBOT]].
{{infobox robot
| name = T1
| organization = [[FDROBOT]]
| height = 160 cm
| weight = 43 kg
| single_hand_payload = 145
| video_link =
| cost =
}}
[[Category:Robots]]
20252dac694875d5b758d42d6348970435b549ae
FDROBOT
0
120
433
2024-04-27T08:30:22Z
User2024
6
Created page with "FDROBOT is building a humanoid robot called [[T1]]. {{infobox company | name = FDROBOT | country = China | website_link = https://twitter.com/FDROBOT192380 | robots = [[T1]]..."
wikitext
text/x-wiki
FDROBOT is building a humanoid robot called [[T1]].
{{infobox company
| name = FDROBOT
| country = China
| website_link = https://twitter.com/FDROBOT192380
| robots = [[T1]]
}}
[[Category:Companies]]
6250f7f31b0b7d0e1802c739d785a446720155b5
Figure 01
0
121
434
2024-04-27T08:34:33Z
User2024
6
Created page with "Figure 01 is a humanoid robot from [[Figure AI]]. {{infobox robot | name = Figure 01 | organization = [[Figure AI]] | height = 167.6 cm | weight = 60 kg | payload = 20 kg | r..."
wikitext
text/x-wiki
Figure 01 is a humanoid robot from [[Figure AI]].
{{infobox robot
| name = Figure 01
| organization = [[Figure AI]]
| height = 167.6 cm
| weight = 60 kg
| payload = 20 kg
| runtime = 5 Hrs
| speed = 1.2 m/s
| video_link = https://www.youtube.com/watch?v=48qL8Jt39Vs
| cost =
}}
[[Category:Robots]]
ccd62749fab3b126bfe82c3f3a58b315c7b1895b
Figure AI
0
122
435
2024-04-27T08:35:00Z
User2024
6
Created page with "Figure AI is building a humanoid robot called [[Figure 01]]. {{infobox company | name = Figure AI | country = USA | website_link = https://www.figure.ai/ | robots = Figure..."
wikitext
text/x-wiki
Figure AI is building a humanoid robot called [[Figure 01]].
{{infobox company
| name = Figure AI
| country = USA
| website_link = https://www.figure.ai/
| robots = [[Figure 01]]
}}
[[Category:Companies]]
9eb6316809fbb97d1020a3d2532ad1d6ebaebc10
RAISE-A1
0
123
436
2024-04-27T09:53:57Z
User2024
6
Created page with "RAISE-A1 is the First-Generation General Embodied Intelligent Robot by [[AGIBOT]]. {{infobox robot | name = RAISE-A1 | organization = [[AGIBOT]] | height = 175 cm | weight =..."
wikitext
text/x-wiki
RAISE-A1 is the First-Generation General Embodied Intelligent Robot by [[AGIBOT]].
{{infobox robot
| name = RAISE-A1
| organization = [[AGIBOT]]
| height = 175 cm
| weight = 55 kg
| single_arm_payload = 5 kg
| runtime = 5 Hrs
| walk_speed = 7 km/h
| video_link = https://www.youtube.com/watch?v=PIYJtZmzs70
| cost =
}}
[[Category:Robots]]
ddef3ec0a8da9773286400faf52429c945f47368
AGIBOT
0
124
437
2024-04-27T09:54:20Z
User2024
6
Created page with "AGIBOT is building a humanoid robot called [[RAISE-A1]]. {{infobox company | name = AGIBOT | country = China | website_link = https://www.agibot.com/ | robots = [[RAISE-A1]]..."
wikitext
text/x-wiki
AGIBOT is building a humanoid robot called [[RAISE-A1]].
{{infobox company
| name = AGIBOT
| country = China
| website_link = https://www.agibot.com/
| robots = [[RAISE-A1]]
}}
[[Category:Companies]]
ca4ed04cf217b02774b9e97259c37d829d43e63a
CL-1
0
125
438
2024-04-27T09:58:43Z
User2024
6
Created page with "CL-1 is a humanoid robot from [[LimX Dynamics]]. {{infobox robot | name = CL-1 | organization = [[LimX Dynamics]] | height = | weight = | single_arm_payload = | runtime =..."
wikitext
text/x-wiki
CL-1 is a humanoid robot from [[LimX Dynamics]].
{{infobox robot
| name = CL-1
| organization = [[LimX Dynamics]]
| height =
| weight =
| single_arm_payload =
| runtime =
| walk_speed =
| video_link = https://www.youtube.com/watch?v=sihIDeJ4Hmk
| cost =
}}
[[Category:Robots]]
86897dd0dbe52e82677d05c45b7893114a59c8f3
CyberOne
0
126
439
2024-04-27T10:01:10Z
User2024
6
Created page with "CyberOne is a humanoid robot from [[Xiaomi]]. {{infobox robot | name = CyberOne | organization = [[Xiaomi]] | height = 177 cm | weight = 52 kg | single_arm_payload = 1.5 | ru..."
wikitext
text/x-wiki
CyberOne is a humanoid robot from [[Xiaomi]].
{{infobox robot
| name = CyberOne
| organization = [[Xiaomi]]
| height = 177 cm
| weight = 52 kg
| single_arm_payload = 1.5
| runtime =
| walk_speed =
| video_link = https://www.youtube.com/watch?v=yBmatGQ0giY
| cost =
}}
[[Category:Robots]]
7eb35a02076a9043d82a00ea802e54f0eb1debc0
Xiaomi
0
127
440
2024-04-27T10:01:21Z
User2024
6
Created page with "Xiaomi is building a humanoid robot called [[CyberOne]]. {{infobox company | name = Xiaomi | country = China | website_link = https://www.mi.com/ | robots = [[CyberOne]] }}..."
wikitext
text/x-wiki
Xiaomi is building a humanoid robot called [[CyberOne]].
{{infobox company
| name = Xiaomi
| country = China
| website_link = https://www.mi.com/
| robots = [[CyberOne]]
}}
[[Category:Companies]]
3c17c1384231b117f5fb8392eb10caea7e691a90
Digit
0
128
441
2024-04-27T10:29:32Z
User2024
6
Created page with "Digit is a humanoid robot from [[Agility]]. {{infobox robot | name = Digit | organization = [[Agility]] | height = 175.3 cm | weight = 65 kg | two_hand_payload = 15.88 | runt..."
wikitext
text/x-wiki
Digit is a humanoid robot from [[Agility]].
{{infobox robot
| name = Digit
| organization = [[Agility]]
| height = 175.3 cm
| weight = 65 kg
| two_hand_payload = 15.88
| runtime =
| walk_speed =
| video_link = https://www.youtube.com/watch?v=NgYo-Wd0E_U
| cost =
}}
[[Category:Robots]]
1b72aeb30ef952d113d6bfcfc087140d5bb5d2d0
450
441
2024-04-27T11:03:28Z
User2024
6
wikitext
text/x-wiki
Digit is a humanoid robot developed by [[Agility]], designed to navigate human environments and perform tasks such as obstacle avoidance and manipulation. It is equipped with a torso full of sensors and a pair of arms, and is marketed as the most advanced Mobile Manipulation Robot (MMR) on the market, capable of performing repetitive tasks in production environments without requiring significant infrastructure changes.
{{infobox robot
| name = Digit
| organization = [[Agility]]
| height = 175.3 cm
| weight = 65 kg
| two_hand_payload = 15.88
| runtime =
| walk_speed =
| video_link = https://www.youtube.com/watch?v=NgYo-Wd0E_U
| cost =
}}
[[Category:Robots]]
8d5847cb50ed4df68f46c5cee9afde1e688e0438
Draco
0
129
442
2024-04-27T10:44:46Z
User2024
6
Created page with "Draco is a high-performance bipedal platform developed by [[Apptronik]]. It’s their first biped robot, designed with a focus on speed and power. The system has 10 Degrees of..."
wikitext
text/x-wiki
Draco is a high-performance bipedal platform developed by [[Apptronik]]. It’s their first biped robot, designed with a focus on speed and power. The system has 10 Degrees of Freedom (DOFs), allowing for a wide range of movements and tasks. One of the key features of Draco is its liquid cooling system, which helps maintain optimal performance during operation.
9a6591cfae481b160e9b772021a985cd4bb38c1f
443
442
2024-04-27T10:48:34Z
User2024
6
wikitext
text/x-wiki
Draco is a high-performance bipedal platform developed by [[Apptronik]]. It’s their first biped robot, designed with a focus on speed and power. The system has 10 Degrees of Freedom (DOFs), allowing for a wide range of movements and tasks. One of the key features of Draco is its liquid cooling system, which helps maintain optimal performance during operation.
Draco is a humanoid robot from [[Apptronik]].
{{infobox robot
| name = Draco
| organization = [[Apptronik]]
| height =
| weight =
| two_hand_payload =
| video_link = https://www.youtube.com/watch?v=g9xt2zdSOo8
| cost =
}}
[[Category:Robots]]
48c1093fe79370ca8831605e39e3c4ec293666c7
444
443
2024-04-27T10:48:48Z
User2024
6
wikitext
text/x-wiki
Draco is a high-performance bipedal platform developed by [[Apptronik]]. It’s their first biped robot, designed with a focus on speed and power. The system has 10 Degrees of Freedom (DOFs), allowing for a wide range of movements and tasks. One of the key features of Draco is its liquid cooling system, which helps maintain optimal performance during operation.
{{infobox robot
| name = Draco
| organization = [[Apptronik]]
| height =
| weight =
| two_hand_payload =
| video_link = https://www.youtube.com/watch?v=g9xt2zdSOo8
| cost =
}}
[[Category:Robots]]
dc60b69b0a395093d82a0aef8719264c8f89fcbb
Apptronik
0
130
445
2024-04-27T10:51:16Z
User2024
6
Created page with "Apptronik is building humanoid robot called [[Draco]]. {{infobox company | name = Apptronik | country = USA | website_link = https://apptronik.com/ | robots = [[Draco]] }} [..."
wikitext
text/x-wiki
Apptronik is building a humanoid robot called [[Draco]].
{{infobox company
| name = Apptronik
| country = USA
| website_link = https://apptronik.com/
| robots = [[Draco]]
}}
[[Category:Companies]]
26e0d333e5bcd6da2ff79675cf94db76bf96b974
446
445
2024-04-27T10:52:03Z
User2024
6
wikitext
text/x-wiki
Apptronik is building humanoid robots called [[Draco]] and [[Valkyrie]].
{{infobox company
| name = Apptronik
| country = USA
| website_link = https://apptronik.com/
| robots = [[Draco]], [[Valkyrie]]
}}
[[Category:Companies]]
74a5a9dd9d000a186c31e2f483de0f3f4ddc7e21
449
446
2024-04-27T11:00:39Z
User2024
6
wikitext
text/x-wiki
Apptronik is building humanoid robots called [[Draco]], [[Valkyrie]], and [[Apollo]].
{{infobox company
| name = Apptronik
| country = USA
| website_link = https://apptronik.com/
| robots = [[Draco]], [[Valkyrie]], [[Apollo]]
}}
[[Category:Companies]]
cd53fe3ca7e563003917e2091ddd8e552708b402
Apollo
0
131
447
2024-04-27T10:59:44Z
User2024
6
Created page with "The humanoid robot Apollo is a creation of [[Apptronik]], a company known for its advanced humanoid robots. Apollo is a practical bipedal platform that’s designed to perform..."
wikitext
text/x-wiki
The humanoid robot Apollo is a creation of [[Apptronik]], a company known for its advanced humanoid robots. Apollo is a practical bipedal platform that’s designed to perform useful tasks. It’s equipped with two NVIDIA Jetson units and has been trained in the Isaac platform’s simulation environment.
{{infobox robot
| name = Apollo
| organization = [[Apptronik]]
| height = 172.7 cm
| weight = 73 kg
| two_hand_payload = 25
| video_link = https://www.youtube.com/watch?v=3CdwPGC9nyk&t=6s
| cost =
}}
[[Category:Robots]]
e68696fb8a554981a2f7d15c305c458f82628815
448
447
2024-04-27T11:00:14Z
User2024
6
wikitext
text/x-wiki
The humanoid robot Apollo is a creation of [[Apptronik]], a company known for its advanced humanoid robots. Apollo is a practical bipedal platform that’s designed to perform useful tasks. It’s equipped with two NVIDIA Jetson units and has been trained in the Isaac platform’s simulation environment.
{{infobox robot
| name = Apollo
| organization = [[Apptronik]]
| height = 172.7 cm
| weight = 73 kg
| two_hand_payload = 25
| video_link = https://www.youtube.com/watch?v=3CdwPGC9nyk&t=6s
| cost =
}}
[[Category:Robots]]
9fbb1c97e4b4ca35032532aad06f947dfc10f962
Eve
0
54
451
245
2024-04-27T11:10:49Z
User2024
6
wikitext
text/x-wiki
EVE is a versatile and agile humanoid robot developed by [[1X]]. It is equipped with cameras and sensors to perceive and interact with its surroundings. EVE's mobility, dexterity, and balance allow it to navigate complex environments and manipulate objects effectively.
{{infobox robot
| name = EVE
| organization = [[1X]]
| height = 186 cm
| weight = 86 kg
| speed = 14.4 km/hr
| carry_capacity = 15 kg
| runtime = 6 hrs
| video_link = https://www.youtube.com/watch?v=20GHG-R9eFI
}}
[[Category:Robots]]
ac4895a1e3c3b9edc77b378bc670bc67c22666ae
Neo
0
55
452
246
2024-04-27T11:14:50Z
User2024
6
wikitext
text/x-wiki
NEO is a bipedal humanoid robot developed by [[1X]]. It is designed to look and move like a human, featuring a head, torso, arms, and legs. NEO can perform a wide range of tasks, excelling in industrial sectors such as security, logistics, and manufacturing, as well as operating machinery and handling complex tasks. It is also envisioned to provide valuable home assistance and perform chores like cleaning or organizing.
{{infobox robot
| name = NEO
| organization = [[1X]]
| height = 165 cm
| weight = 30 kg
| video_link = https://www.youtube.com/watch?v=ikg7xGxvFTs
| speed = 4 km/hr
| carry_capacity = 20 kg
| runtime = 2-4 hrs
}}
[[Category:Robots]]
ef6ec11b98a7c785b7c3d3b46ca01e02a6170672
GR-1
0
57
453
250
2024-04-27T11:17:38Z
User2024
6
wikitext
text/x-wiki
GR-1 is a self-developed and mass-produced humanoid robot by [[Fourier Intelligence]]. It has a highly bionic torso and human-like motion control capabilities, with up to 54 Degrees of Freedom (DoFs) across its form. GR-1 can walk briskly, adroitly avoid obstacles, stably descend a slope, and withstand disruptions, making it an ideal physical agent of artificial general intelligence (AGI).
{{infobox robot
| name = GR-1
| organization = [[Fourier Intelligence]]
| height = 165 cm
| weight = 55 kg
| video_link = https://www.youtube.com/watch?v=SHPxcRBlXN0
| cost = USD 149,999
}}
[[Category:Robots]]
46362f2ec92e2bfe21cb5db031780b2567c58cb8
Walker X
0
71
454
313
2024-04-27T12:09:15Z
User2024
6
wikitext
text/x-wiki
Walker X is a highly advanced AI humanoid robot developed by [[UBTECH]]. It incorporates six cutting-edge AI technologies, including upgraded vision-based navigation and hand-eye coordination, enabling it to move smoothly and quickly and to engage in precise and safe interactions. It is equipped with 41 high-performance servo joints, a 160° face surrounding 4.6K HD dual flexible curved screen, and a 4-dimensional light language system.
{{infobox robot
| name = Walker X
| organization = [[UBTech]]
| height = 130 cm
| weight = 63 kg
| single_hand_payload = 1.5
| two_hand_payload = 3
| cost = USD 960,000
| video_link = https://www.youtube.com/watch?v=4ZL3LgdKNbw
}}
[[Category:Robots]]
7b482c461d06f80601a6dddcdbda0adba80483f3
Panda Robot
0
73
455
314
2024-04-27T12:12:57Z
User2024
6
wikitext
text/x-wiki
Panda Robot by [[UBTECH]] is a humanoid robot that was created using the iconic panda image and includes original cutting-edge technologies based on the humanoid service robot, Walker1. This robot has an expressive, life-like appearance along with a multi-modal design.
{{infobox robot
| name = Panda Robot
| organization = [[UBTech]]
| height = 130 cm
| weight = 63 kg
| single_hand_payload = 1.5
| two_hand_payload = 3
| cost = USD 960,000
}}
[[Category:Robots]]
850368072fbb1748ed46620f5e9dee7be12e96ef
Walker S
0
74
456
317
2024-04-27T12:17:20Z
User2024
6
wikitext
text/x-wiki
Walker S by [[UBTech]] is a highly advanced humanoid robot designed to serve in household and office scenarios. It is equipped with 36 high-performance servo joints and a full range of sensory systems including force, vision, hearing, and spatial awareness, enabling smooth and fast walking and flexible, precise handling.
{{infobox robot
| name = Walker S
| organization = [[UBTech]]
| height =
| weight =
| video_link = https://www.youtube.com/watch?v=UCt7qPpTt-g
| single_hand_payload =
| two_hand_payload =
| cost =
}}
[[Category:Robots]]
29aa768cbcbeb88a8767dd7274790b1e352120c6
Wukong-IV
0
75
457
319
2024-04-27T12:20:08Z
User2024
6
wikitext
text/x-wiki
Wukong-IV is an adult-size humanoid robot designed and built by the research team at [[Deep Robotics]]. It is actuated by electric motor joints. The robot has 6 degrees of freedom (DoF) on each leg and 4 DoFs on each arm.
{{infobox robot
| name = Wukong-IV
| organization = [[Deep Robotics]]
| height = 140 cm
| weight = 45 kg
| single_hand_payload =
| two_hand_payload =
| cost =
| video_link = https://www.youtube.com/watch?v=fbk4fYc6U14
}}
[[Category:Robots]]
44f00af73326d5a6a391a91e6668a939c10a41bc
XBot
0
132
458
2024-04-27T12:26:17Z
User2024
6
Created page with "XBot is a humanoid robot developed by [[Robot Era]], a startup incubated by Tsinghua University. The company has open-sourced its reinforcement learning framework, Humanoid-Gy..."
wikitext
text/x-wiki
XBot is a humanoid robot developed by [[Robot Era]], a startup incubated by Tsinghua University. The company has open-sourced its reinforcement learning framework, Humanoid-Gym, which was used to train the XBot and has proven successful in sim-to-real policy transfer.
{{infobox robot
| name = Xbot
| organization = [[Robot Era]]
| height = 122 cm
| weight = 38 kg
| two_hand_payload = 25
| video_link = https://www.youtube.com/watch?v=4tiVkZBw188
| cost =
}}
[[Category:Robots]]
ff6bbf459ccf205b3a714089b5444310c1d21283
Robot Era
0
133
459
2024-04-27T12:26:48Z
User2024
6
Created page with "Robot Era is building humanoid robot called [[Xbot]]. {{infobox company | name = Robot Era | country = China | website_link = https://www.robotera.com/ | robots = [[Xbot]] }}..."
wikitext
text/x-wiki
Robot Era is building a humanoid robot called [[Xbot]].
{{infobox company
| name = Robot Era
| country = China
| website_link = https://www.robotera.com/
| robots = [[Xbot]]
}}
[[Category:Companies]]
3fc5c050e30e391b5821a1fa6420df9ad3df923e
XR4
0
77
460
321
2024-04-27T12:35:56Z
User2024
6
wikitext
text/x-wiki
The XR4, also known as Xiaozi, one of the Seven Fairies, is a humanoid bipedal robot developed by [[DATAA Robotics]]. XR4 is made from lightweight, high-strength carbon fiber composite material with over 60 intelligent flexible joints.
{{infobox robot
| name = XR4
| organization = [[DATAA Robotics]]
| height = 168 cm
| weight = 65 kg
| video_link = https://www.youtube.com/watch?v=DUyUZcH5uUU
| single_hand_payload =
| two_hand_payload =
| cost =
}}
[[Category:Robots]]
18a87745e6f5d01e9ce4b87b63fb54b84b3f25d8
ZEUS2Q
0
79
461
323
2024-04-27T14:44:02Z
User2024
6
wikitext
text/x-wiki
ZEUS2Q is a humanoid robot developed by [[System Technology Works]]. It's a stand-alone system that harnesses the power of Edge AI computing, enabling it to perform localized AI tasks such as communication, facial recognition, and object recognition on the edge.
{{infobox robot
| name = ZEUS2Q
| organization = [[System Technology Works]]
| height = 127 cm
| weight = 13.61 kg
| video_link = https://www.youtube.com/watch?v=eR2HMykMITY
}}
[[Category:Robots]]
e43d879da7b8f0f92523c99ae53862afb9b3508a
HD Atlas
0
134
462
2024-04-27T14:48:04Z
User2024
6
Created page with "HD Atlas is a highly dynamic humanoid robot developed by [[Boston Dynamics]]. It’s designed for real-world applications and is capable of demonstrating advanced athletics an..."
wikitext
text/x-wiki
HD Atlas is a highly dynamic humanoid robot developed by [[Boston Dynamics]]. It’s designed for real-world applications and is capable of demonstrating advanced athletics and agility.
{{infobox robot
| name = HD Atlas
| organization = [[Boston Dynamics]]
| height =
| weight =
| two_hand_payload =
| video_link = https://www.youtube.com/watch?v=-9EM5_VFlt8
| cost =
}}
[[Category:Robots]]
639b2a2c72fa931b7b173eb40bb2b73f3e1b6218
Boston Dynamics
0
82
463
329
2024-04-27T14:49:05Z
User2024
6
wikitext
text/x-wiki
Boston Dynamics is building humanoid robots called [[Atlas]] and [[HD Atlas]].
{{infobox company
| name = Boston Dynamics
| country = USA
| website_link = https://bostondynamics.com/
| robots = [[Atlas]], [[HD Atlas]]
}}
[[Category:Companies]]
f585f5da86cc0e5a8bffe4ba99e3742a9b8e58cd
Valkyrie
0
117
464
430
2024-04-27T14:51:10Z
User2024
6
wikitext
text/x-wiki
[[NASA]]’s Valkyrie, also known as R5, is a robust, rugged, and entirely electric humanoid robot. It was designed and built by the Johnson Space Center (JSC) Engineering Directorate to compete in the 2013 DARPA Robotics Challenge (DRC) Trials.
{{infobox robot
| name = Valkyrie
| organization = [[NASA]]
| height = 190 cm
| weight = 125 kg
| video_link = https://www.youtube.com/watch?v=LaYlQYHXJio
| cost =
}}
[[Category:Robots]]
d563055d5de103ac4c32f7700cc28c14d142db7a
465
464
2024-04-27T14:54:59Z
User2024
6
Blanked the page
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
466
465
2024-04-27T14:55:31Z
User2024
6
wikitext
text/x-wiki
NASA’s Valkyrie, also known as R5, is a robust, rugged, and entirely electric humanoid robot. It was designed and built by the Johnson Space Center (JSC) Engineering Directorate to compete in the 2013 DARPA Robotics Challenge (DRC) Trials.
{{infobox robot
| name = Valkyrie
| organization = [[NASA]]
| height = 190 cm
| weight = 125 kg
| video_link = https://www.youtube.com/watch?v=LaYlQYHXJio
| cost =
}}
[[Category:Robots]]
fa7979a3fc124ab7a4b2a4b28b2a912b541e946b
RAISE-A1
0
123
467
436
2024-04-27T15:01:58Z
User2024
6
wikitext
text/x-wiki
RAISE-A1 is the first-generation general embodied intelligent robot developed by [[AGIBOT]]. The robot showcases industry-leading capabilities in bipedal walking intelligence and human-machine interaction, and is designed for use in various fields such as flexible manufacturing, interactive services, education and research, specialized avatars, warehousing logistics, and robotic household assistants.
{{infobox robot
| name = RAISE-A1
| organization = [[AGIBOT]]
| height = 175 cm
| weight = 55 kg
| single_arm_payload = 5 kg
| runtime = 5 Hrs
| walk_speed = 7 km/h
| video_link = https://www.youtube.com/watch?v=PIYJtZmzs70
| cost =
}}
[[Category:Robots]]
1a3f05eb51515f3a8bb23b9084f998168504b394
CL-1
0
125
468
438
2024-04-27T15:04:02Z
User2024
6
wikitext
text/x-wiki
The CL-1 is a humanoid robot developed by [[LimX Dynamics]]. It’s one of the few robots globally that can dynamically climb stairs based on real-time terrain perception. This advanced capability is attributed to [[LimX Dynamics]]’ motion control and AI algorithms, as well as its proprietary high-performing actuators and hardware system.
{{infobox robot
| name = CL-1
| organization = [[LimX Dynamics]]
| height =
| weight =
| single_arm_payload =
| runtime =
| walk_speed =
| video_link = https://www.youtube.com/watch?v=sihIDeJ4Hmk
| cost =
}}
[[Category:Robots]]
215e104b574859dfa8dda7f1db0b17121e13c723
T-HR3
0
135
469
2024-04-27T15:09:15Z
User2024
6
Created page with "The T-HR3 is the third generation humanoid robot unveiled by [[Toyota]]’s Partner Robot Division. It’s designed to explore new technologies for safely managing physical in..."
wikitext
text/x-wiki
The T-HR3 is the third generation humanoid robot unveiled by [[Toyota]]’s Partner Robot Division. It’s designed to explore new technologies for safely managing physical interactions between robots and their surroundings, and it features a remote maneuvering system that mirrors user movements to the robot. The T-HR3 can assist humans in various settings such as homes, medical facilities, construction sites, disaster-stricken areas, and even outer space.
{{infobox robot
| name = T-HR3
| organization = [[Toyota]]
| height = 150 cm
| weight = 75 kg
| two_hand_payload =
| video_link = https://www.youtube.com/watch?v=5dPY7l7u_z0
| cost =
}}
[[Category:Robots]]
692e0548015f7b26cb33dad8f97753d73bd37c41
472
469
2024-04-27T15:15:11Z
User2024
6
wikitext
text/x-wiki
The T-HR3 is the third generation humanoid robot unveiled by [[Toyota Research Institute]]. It’s designed to explore new technologies for safely managing physical interactions between robots and their surroundings, and it features a remote maneuvering system that mirrors user movements to the robot. The T-HR3 can assist humans in various settings such as homes, medical facilities, construction sites, disaster-stricken areas, and even outer space.
{{infobox robot
| name = T-HR3
| organization = [[Toyota Research Institute]]
| height = 150 cm
| weight = 75 kg
| two_hand_payload =
| video_link = https://www.youtube.com/watch?v=5dPY7l7u_z0
| cost =
}}
[[Category:Robots]]
90922e534c0b2d82ee92eba63498f0bacef89a19
473
472
2024-04-27T15:15:50Z
User2024
6
wikitext
text/x-wiki
The T-HR3 is the third generation humanoid robot unveiled by [[Toyota]]. It’s designed to explore new technologies for safely managing physical interactions between robots and their surroundings, and it features a remote maneuvering system that mirrors user movements to the robot. The T-HR3 can assist humans in various settings such as homes, medical facilities, construction sites, disaster-stricken areas, and even outer space.
{{infobox robot
| name = T-HR3
| organization = [[Toyota Research Institute]]
| height = 150 cm
| weight = 75 kg
| two_hand_payload =
| video_link = https://www.youtube.com/watch?v=5dPY7l7u_z0
| cost =
}}
[[Category:Robots]]
ce981df64a53c077dd5d039e9722b58d0b51eb56
475
473
2024-04-27T15:16:48Z
User2024
6
wikitext
text/x-wiki
The T-HR3 is the third generation humanoid robot unveiled by [[Toyota Research Institute]]. It’s designed to explore new technologies for safely managing physical interactions between robots and their surroundings, and it features a remote maneuvering system that mirrors user movements to the robot. The T-HR3 can assist humans in various settings such as homes, medical facilities, construction sites, disaster-stricken areas, and even outer space.
{{infobox robot
| name = T-HR3
| organization = [[Toyota Research Institute]]
| height = 150 cm
| weight = 75 kg
| two_hand_payload =
| video_link = https://www.youtube.com/watch?v=5dPY7l7u_z0
| cost =
}}
[[Category:Robots]]
90922e534c0b2d82ee92eba63498f0bacef89a19
Toyota
0
136
470
2024-04-27T15:10:26Z
User2024
6
Created page with "Toyota is building humanoid robot called [[T-HR3]]. {{infobox company | name = Toyota | country = Japan | website_link = https://global.toyota/ | robots = [[T-HR3]] }} Cat..."
wikitext
text/x-wiki
Toyota is building a humanoid robot called [[T-HR3]].
{{infobox company
| name = Toyota
| country = Japan
| website_link = https://global.toyota/
| robots = [[T-HR3]]
}}
[[Category:Companies]]
318959cdb49e082c001c3a40301f8ce6533a52a7
474
470
2024-04-27T15:16:27Z
User2024
6
wikitext
text/x-wiki
Toyota is building a humanoid robot called [[T-HR3]].
{{infobox company
| name = Toyota Research Institute
| country = Japan
| website_link = https://global.toyota/
| robots = [[T-HR3]]
}}
[[Category:Companies]]
88e5773390dc3ff8b7ff1d3b7a7e0628aa22a2c6
Punyo
0
137
471
2024-04-27T15:14:11Z
User2024
6
Created page with "Punyo is a soft robot developed by the [[Toyota Research Institute]] (TRI) to revolutionize whole-body manipulation research. Unlike traditional robots that primarily use hand..."
wikitext
text/x-wiki
Punyo is a soft robot developed by the [[Toyota Research Institute]] (TRI) to revolutionize whole-body manipulation research. Unlike traditional robots that primarily use hands for manipulation, Punyo employs its arms and chest. The robot is designed to help with everyday tasks, such as lifting heavy objects or closing a drawer.
{{infobox robot
| name = Punyo
| organization = [[Toyota Research Institute]]
| height =
| weight =
| two_hand_payload =
| video_link = https://www.youtube.com/watch?v=FY-MD4gteeE
| cost =
}}
[[Category:Robots]]
fdf2d1df61931bb28cf62592bacae471e9973899
Toyota Research Institute
0
138
476
2024-04-27T15:17:17Z
User2024
6
Created page with "Toyota Research Institute is building humanoid robot called [[T-HR3]]. {{infobox company | name = Toyota Research Institute | country = Japan | website_link = https://global...."
wikitext
text/x-wiki
Toyota Research Institute is building a humanoid robot called [[T-HR3]].
{{infobox company
| name = Toyota Research Institute
| country = Japan
| website_link = https://global.toyota/
| robots = [[T-HR3]]
}}
[[Category:Companies]]
ff87bdcc0c895e7d79b56332d5b000a4a01b3596
K-Scale Intern Onboarding
0
139
477
2024-04-27T20:28:02Z
Ben
2
Created page with "Congratulations on your internship at K-Scale Labs! We are excited for you to join us. === Pre-Internship Checklist === - Create a Wiki account - Add yourself to K-Scale E..."
wikitext
text/x-wiki
Congratulations on your internship at K-Scale Labs! We are excited for you to join us.
=== Pre-Internship Checklist ===
- Create a Wiki account
- Add yourself to [[K-Scale Employees]]
e38ee821ad3944b8781f71b270c88ac1aaeda29c
478
477
2024-04-27T20:28:09Z
Ben
2
wikitext
text/x-wiki
Congratulations on your internship at K-Scale Labs! We are excited for you to join us.
=== Pre-Internship Checklist ===
* Create a Wiki account
* Add yourself to [[K-Scale Employees]]
bbbddfd920caf01d4ffaccf2eee452879439b31b
482
478
2024-04-27T20:32:29Z
Ben
2
wikitext
text/x-wiki
Congratulations on your internship at K-Scale Labs! We are excited for you to join us.
=== Pre-Internship Checklist ===
* Create a wiki account and mark yourself as an employee (you can use [[User:Ben]] as a template). You'll use your account as the main way to keep track of what you've done over the course of the internship.
* Contribute an article about something you find interesting. See the [[Contributing]] guide.
4944d70be5b5a3f2fb3f2e126bc379cd6f0bcb67
K-Scale Employees
0
140
479
2024-04-27T20:28:53Z
Ben
2
Created page with "This is an unofficial list of all current and past employees. === Current Employees === * [[User:Ben]]"
wikitext
text/x-wiki
This is an unofficial list of all current and past employees.
=== Current Employees ===
* [[User:Ben]]
3e0461d5fa7adca5f18bc570900fe4beebd2d85c
User:Ben
2
141
480
2024-04-27T20:29:24Z
Ben
2
Created page with "[https://ben.bolte.cc/ Ben] is the founder and CEO of [[K-Scale Labs]]. [[Category: K-Scale Employees]]"
wikitext
text/x-wiki
[https://ben.bolte.cc/ Ben] is the founder and CEO of [[K-Scale Labs]].
[[Category: K-Scale Employees]]
cac9f0ee540eea47256394c9cfe52a5cc8fbe253
485
480
2024-04-27T20:43:28Z
Ben
2
wikitext
text/x-wiki
[https://ben.bolte.cc/ Ben] is the founder and CEO of [[K-Scale Labs]].
{{infobox person
| name = Ben Bolte
| organization = [[K-Scale Labs]]
| title = CEO
| website_link = https://ben.bolte.cc/
}}
[[Category: K-Scale Employees]]
a5db4124c07ac2c71491cd255b7cc7e15d338a7d
488
485
2024-04-27T20:47:22Z
Ben
2
wikitext
text/x-wiki
[[File:Ben.jpg|right|200px|thumb]]
[https://ben.bolte.cc/ Ben] is the founder and CEO of [[K-Scale Labs]].
{{infobox person
| name = Ben Bolte
| organization = [[K-Scale Labs]]
| title = CEO
| website_link = https://ben.bolte.cc/
}}
[[Category: K-Scale Employees]]
f971d4004fa84319343a24548245ebcf11a88d16
Category:K-Scale Employees
14
142
481
2024-04-27T20:29:58Z
Ben
2
Created page with "This category is for users who self-identify as employees at [[K-Scale Labs]]."
wikitext
text/x-wiki
This category is for users who self-identify as employees at [[K-Scale Labs]].
41ce1a8090e42fc0c7dfe8ce65a65dc2154608ed
Template:Infobox person
10
143
483
2024-04-27T20:42:00Z
Ben
2
Created page with "{{infobox | name = {{{name}}} | key1 = Name | value1 = {{{name}}} | key2 = Website | value2 = {{#if: {{{website_link|}}} | [{{{website_link}}} Website] }} }}"
wikitext
text/x-wiki
{{infobox
| name = {{{name}}}
| key1 = Name
| value1 = {{{name}}}
| key2 = Website
| value2 = {{#if: {{{website_link|}}} | [{{{website_link}}} Website] }}
}}
f2aa4d7f7697d8ccdf721ffa685ab27118297fbc
484
483
2024-04-27T20:43:24Z
Ben
2
wikitext
text/x-wiki
{{infobox
| name = {{{name}}}
| key1 = Name
| value1 = {{{name}}}
| key2 = Organization
| value2 = {{{organization|}}}
| key3 = Title
| value3 = {{{title|}}}
| key4 = Website
| value4 = {{#if: {{{website_link|}}} | [{{{website_link}}} Website] }}
}}
101094484a12137b202efeac2de5a495b37e942f
File:1x Neo.jpg
6
144
486
2024-04-27T20:44:44Z
ThanatosRobo
13
wikitext
text/x-wiki
Concept Render of Neo
65a6e2d609f46bdcaa86e6bc123df569d52693cb
File:Ben.jpg
6
145
487
2024-04-27T20:47:12Z
Ben
2
wikitext
text/x-wiki
Ben and his wife next to a rocket
d3be019551b44d5fdf287bb2cbd3867095f29680
K-Scale Cluster
0
16
489
392
2024-04-27T20:50:31Z
Kewang
11
/* Onboarding */
wikitext
text/x-wiki
The K-Scale Labs cluster is a shared cluster for robotics research. This page contains notes on how to access the cluster.
=== Onboarding ===
To get onboarded, you should send us the public key that you want to use and maybe your preferred username.
After being onboarded, you should receive the following information:
* Your user ID (for this example, we'll use <code>stompy</code>)
* The jumphost ID (for this example, we'll use <code>127.0.0.1</code>)
* The cluster ID (for this example, we'll use <code>127.0.0.2</code>)
To connect, you should be able to use the following command:
<syntaxhighlight lang="bash">
ssh -o ProxyCommand="ssh -i ~/.ssh/id_rsa -W %h:%p stompy@127.0.0.1" stompy@127.0.0.2 -i ~/.ssh/id_rsa
</syntaxhighlight>
Note that <code>~/.ssh/id_rsa</code> should point to your private key file.
Alternatively, you can add the following to your SSH config file, which should allow you to connect directly:
<syntaxhighlight lang="text">
Host jumphost
User stompy
Hostname 127.0.0.1
IdentityFile ~/.ssh/id_rsa
Host cluster
User stompy
Hostname 127.0.0.2
ProxyJump jumphost
IdentityFile ~/.ssh/id_rsa
</syntaxhighlight>
Please inform us if you have any issues.
=== Notes ===
* You may be sharing your part of the cluster with other users. If so, it is a good idea to avoid using all the GPUs. If you're training models in PyTorch, you can do this using the <code>CUDA_VISIBLE_DEVICES</code> environment variable.
* You should avoid storing data files and model checkpoints to your root directory. Instead, use the <code>/ephemeral</code> directory. Your home directory should come with a symlink to a subdirectory which you have write access to.
d3af9079c5c5eb43a61f0d20967a74be25564fce
490
489
2024-04-27T20:54:32Z
Kewang
11
/* Onboarding */
wikitext
text/x-wiki
The K-Scale Labs cluster is a shared cluster for robotics research. This page contains notes on how to access the cluster.
=== Onboarding ===
To get onboarded, you should send us the public key that you want to use and maybe your preferred username.
After being onboarded, you should receive the following information:
* Your user ID (for this example, we'll use <code>stompy</code>)
* The jumphost ID (for this example, we'll use <code>127.0.0.1</code>)
* The cluster ID (for this example, we'll use <code>127.0.0.2</code>)
To connect, you should be able to use the following command:
<syntaxhighlight lang="bash">
ssh -o ProxyCommand="ssh -i ~/.ssh/id_rsa -W %h:%p stompy@127.0.0.1" stompy@127.0.0.2 -i ~/.ssh/id_rsa
</syntaxhighlight>
Note that <code>~/.ssh/id_rsa</code> should point to your private key file.
Alternatively, you can add the following to your SSH config file, which should allow you to connect directly,
Use your favorite editor to open the ssh config file (normally located at ~/.ssh/config for Ubuntu) and paste the text:
<syntaxhighlight lang="text">
Host jumphost
User stompy
Hostname 127.0.0.1
IdentityFile ~/.ssh/id_rsa
Host cluster
User stompy
Hostname 127.0.0.2
ProxyJump jumphost
IdentityFile ~/.ssh/id_rsa
</syntaxhighlight>
Please inform us if you have any issues.
=== Notes ===
* You may be sharing your part of the cluster with other users. If so, it is a good idea to avoid using all the GPUs. If you're training models in PyTorch, you can do this using the <code>CUDA_VISIBLE_DEVICES</code> environment variable.
* You should avoid storing data files and model checkpoints to your root directory. Instead, use the <code>/ephemeral</code> directory. Your home directory should come with a symlink to a subdirectory which you have write access to.
368dc849f1da29eae210a5e0656b854e3a7937e8
491
490
2024-04-27T20:56:24Z
Kewang
11
/* Onboarding */
wikitext
text/x-wiki
The K-Scale Labs cluster is a shared cluster for robotics research. This page contains notes on how to access the cluster.
=== Onboarding ===
To get onboarded, you should send us the public key that you want to use and maybe your preferred username.
After being onboarded, you should receive the following information:
* Your user ID (for this example, we'll use <code>stompy</code>)
* The jumphost ID (for this example, we'll use <code>127.0.0.1</code>)
* The cluster ID (for this example, we'll use <code>127.0.0.2</code>)
To connect, you should be able to use the following command:
<syntaxhighlight lang="bash">
ssh -o ProxyCommand="ssh -i ~/.ssh/id_rsa -W %h:%p stompy@127.0.0.1" stompy@127.0.0.2 -i ~/.ssh/id_rsa
</syntaxhighlight>
Note that <code>~/.ssh/id_rsa</code> should point to your private key file.
Alternatively, you can add the following to your SSH config file, which should allow you to connect directly,
Use your favorite editor to open the ssh config file (normally located at <code>~/.ssh/config</code> for Ubuntu) and paste the text:
<syntaxhighlight lang="text">
Host jumphost
User stompy
Hostname 127.0.0.1
IdentityFile ~/.ssh/id_rsa
Host cluster
User stompy
Hostname 127.0.0.2
ProxyJump jumphost
IdentityFile ~/.ssh/id_rsa
</syntaxhighlight>
Please inform us if you have any issues.
=== Notes ===
* You may be sharing your part of the cluster with other users. If so, it is a good idea to avoid using all the GPUs. If you're training models in PyTorch, you can do this using the <code>CUDA_VISIBLE_DEVICES</code> environment variable.
* You should avoid storing data files and model checkpoints to your root directory. Instead, use the <code>/ephemeral</code> directory. Your home directory should come with a symlink to a subdirectory which you have write access to.
555e0b7bdd88366179ef8067fd0d4f96ddfb7a36
492
491
2024-04-27T20:56:45Z
Kewang
11
/* Onboarding */
wikitext
text/x-wiki
The K-Scale Labs cluster is a shared cluster for robotics research. This page contains notes on how to access the cluster.
=== Onboarding ===
To get onboarded, you should send us the public key that you want to use and maybe your preferred username.
After being onboarded, you should receive the following information:
* Your user ID (for this example, we'll use <code>stompy</code>)
* The jumphost ID (for this example, we'll use <code>127.0.0.1</code>)
* The cluster ID (for this example, we'll use <code>127.0.0.2</code>)
To connect, you should be able to use the following command:
<syntaxhighlight lang="bash">
ssh -o ProxyCommand="ssh -i ~/.ssh/id_rsa -W %h:%p stompy@127.0.0.1" stompy@127.0.0.2 -i ~/.ssh/id_rsa
</syntaxhighlight>
Note that <code>~/.ssh/id_rsa</code> should point to your private key file.
Alternatively, you can add the following to your SSH config file, which should allow you to connect directly,
Use your favorite editor to open the ssh config file (normally located at <code>~/.ssh/config</code> for Ubuntu) and paste the text:
<syntaxhighlight lang="text">
Host jumphost
User stompy
Hostname 127.0.0.1
IdentityFile ~/.ssh/id_rsa
Host cluster
User stompy
Hostname 127.0.0.2
ProxyJump jumphost
IdentityFile ~/.ssh/id_rsa
</syntaxhighlight>
Please inform us if you have any issues.
=== Notes ===
* You may be sharing your part of the cluster with other users. If so, it is a good idea to avoid using all the GPUs. If you're training models in PyTorch, you can do this using the <code>CUDA_VISIBLE_DEVICES</code> environment variable.
* You should avoid storing data files and model checkpoints to your root directory. Instead, use the <code>/ephemeral</code> directory. Your home directory should come with a symlink to a subdirectory which you have write access to.
f560b17e0ba5c9f13f967e5e6d9c90d779be7572
500
492
2024-04-27T21:10:08Z
Kewang
11
/* Onboarding */
wikitext
text/x-wiki
The K-Scale Labs cluster is a shared cluster for robotics research. This page contains notes on how to access the cluster.
=== Onboarding ===
To get onboarded, you should send us the public key that you want to use and maybe your preferred username.
After being onboarded, you should receive the following information:
* Your user ID (for this example, we'll use <code>stompy</code>)
* The jumphost ID (for this example, we'll use <code>127.0.0.1</code>)
* The cluster ID (for this example, we'll use <code>127.0.0.2</code>)
To connect, you should be able to use the following command:
<syntaxhighlight lang="bash">
ssh -o ProxyCommand="ssh -i ~/.ssh/id_rsa -W %h:%p stompy@127.0.0.1" stompy@127.0.0.2 -i ~/.ssh/id_rsa
</syntaxhighlight>
Note that <code>~/.ssh/id_rsa</code> should point to your private key file.
Alternatively, you can add the following to your SSH config file, which should allow you to connect directly,
Use your favorite editor to open the ssh config file (normally located at <code>~/.ssh/config</code> for Ubuntu) and paste the text:
<syntaxhighlight lang="text">
Host jumphost
User stompy
Hostname 127.0.0.1
IdentityFile ~/.ssh/id_rsa
Host cluster
User stompy
Hostname 127.0.0.2
ProxyJump jumphost
IdentityFile ~/.ssh/id_rsa
</syntaxhighlight>
You need to restart ssh to get it working.
After setting this up, you can use the command <code>ssh cluster</code> to directly connect. <br>
You can also access via VScode. Tutorial of using ssh in VScode is [https://code.visualstudio.com/docs/remote/ssh-tutorial Here].<br>
Please inform us if you have any issues.
=== Notes ===
* You may be sharing your part of the cluster with other users. If so, it is a good idea to avoid using all the GPUs. If you're training models in PyTorch, you can do this using the <code>CUDA_VISIBLE_DEVICES</code> environment variable.
* You should avoid storing data files and model checkpoints to your root directory. Instead, use the <code>/ephemeral</code> directory. Your home directory should come with a symlink to a subdirectory which you have write access to.
cf5db2b85f16a040c7758240f7f3e973f4a9c0fd
501
500
2024-04-27T21:11:26Z
Stompy
14
wikitext
text/x-wiki
The K-Scale Labs cluster is a shared cluster for robotics research. This page contains notes on how to access the cluster.
=== Onboarding ===
To get onboarded, you should send us the public key that you want to use and maybe your preferred username.
After being onboarded, you should receive the following information:
* Your user ID (for this example, we'll use <code>stompy</code>)
* The jumphost ID (for this example, we'll use <code>127.0.0.1</code>)
* The cluster ID (for this example, we'll use <code>127.0.0.2</code>)
To connect, you should be able to use the following command:
<syntaxhighlight lang="bash">
ssh -o ProxyCommand="ssh -i ~/.ssh/id_rsa -W %h:%p stompy@127.0.0.1" stompy@127.0.0.2 -i ~/.ssh/id_rsa
</syntaxhighlight>
Note that <code>~/.ssh/id_rsa</code> should point to your private key file.
Alternatively, you can add the following to your SSH config file, which should allow you to connect directly,
Use your favorite editor to open the ssh config file (normally located at <code>~/.ssh/config</code> for Ubuntu) and paste the text:
<syntaxhighlight lang="text">
Host jumphost
User stompy
Hostname 127.0.0.1
IdentityFile ~/.ssh/id_rsa
Host cluster
User stompy
Hostname 127.0.0.2
ProxyJump jumphost
IdentityFile ~/.ssh/id_rsa
</syntaxhighlight>
You need to restart <code>ssh</code> to get it working.
After setting this up, you can use the command <code>ssh cluster</code> to directly connect.
You can also access via VS Code. Tutorial of using <code>ssh</code> in VS Code is [https://code.visualstudio.com/docs/remote/ssh-tutorial here].
Please inform us if you have any issues!
=== Notes ===
* You may be sharing your part of the cluster with other users. If so, it is a good idea to avoid using all the GPUs. If you're training models in PyTorch, you can do this using the <code>CUDA_VISIBLE_DEVICES</code> environment variable.
* You should avoid storing data files and model checkpoints to your root directory. Instead, use the <code>/ephemeral</code> directory. Your home directory should come with a symlink to a subdirectory which you have write access to.
9dc6c4c41600329bc73427dfe7b564dc4fbacb72
502
501
2024-04-27T21:11:43Z
Stompy
14
wikitext
text/x-wiki
The K-Scale Labs cluster is a shared cluster for robotics research. This page contains notes on how to access the cluster.
=== Onboarding ===
To get onboarded, you should send us the public key that you want to use and, optionally, your preferred username.
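If you don't already have a key pair, the following is a minimal sketch of generating one (assumptions: an RSA key stored at <code>~/.ssh/id_rsa</code>, matching the path used in the connection examples below; the comment string is just a label):
<syntaxhighlight lang="bash">
# Generate a new RSA key pair; skip this if you already have a key you want to use.
# The -f path matches the ~/.ssh/id_rsa path assumed in the examples below.
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa -C "you@example.com"

# Send us the contents of the .pub file only; never share the private key.
cat ~/.ssh/id_rsa.pub
</syntaxhighlight>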
After being onboarded, you should receive the following information:
* Your user ID (for this example, we'll use <code>stompy</code>)
* The jumphost ID (for this example, we'll use <code>127.0.0.1</code>)
* The cluster ID (for this example, we'll use <code>127.0.0.2</code>)
To connect, you should be able to use the following command:
<syntaxhighlight lang="bash">
ssh -o ProxyCommand="ssh -i ~/.ssh/id_rsa -W %h:%p stompy@127.0.0.1" stompy@127.0.0.2 -i ~/.ssh/id_rsa
</syntaxhighlight>
Note that <code>~/.ssh/id_rsa</code> should point to your private key file.
Alternatively, you can add the following to your SSH config file, which should allow you to connect directly.
Open the SSH config file in your favorite editor (it is normally located at <code>~/.ssh/config</code> on Ubuntu) and paste the following:
<syntaxhighlight lang="text">
Host jumphost
User stompy
Hostname 127.0.0.1
IdentityFile ~/.ssh/id_rsa
Host cluster
User stompy
Hostname 127.0.0.2
ProxyJump jumphost
IdentityFile ~/.ssh/id_rsa
</syntaxhighlight>
After setting this up, you can use the command <code>ssh cluster</code> to connect directly.
You can also connect from VS Code; a tutorial on using SSH with VS Code is available [https://code.visualstudio.com/docs/remote/ssh-tutorial here].
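For example, with the config above in place (the file name in the <code>scp</code> line is only a placeholder):
<syntaxhighlight lang="bash">
# Open an interactive shell on the cluster, hopping through the jumphost automatically.
ssh cluster

# The same alias works for file transfer, e.g. copying a local script to your home directory.
scp train.py cluster:~/
</syntaxhighlight>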
Please inform us if you have any issues!
=== Notes ===
* You may need to start a new SSH session for the configuration change to take effect.
* You may be sharing your part of the cluster with other users. If so, it is a good idea to avoid using all the GPUs. If you're training models in PyTorch, you can limit which GPUs a job sees with the <code>CUDA_VISIBLE_DEVICES</code> environment variable (see the example below).
* You should avoid storing data files and model checkpoints in your root directory. Instead, use the <code>/ephemeral</code> directory; your home directory should come with a symlink to a subdirectory which you have write access to.
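A minimal sketch of both points (assumptions: <code>train.py</code> and its <code>--checkpoint-dir</code> flag are placeholders for your own script, and the <code>/ephemeral/$USER</code> layout is illustrative; check where your home-directory symlink actually points):
<syntaxhighlight lang="bash">
# Restrict a PyTorch run to GPUs 0 and 1; the process will only see those two devices.
CUDA_VISIBLE_DEVICES=0,1 python train.py

# Keep large artifacts on the shared /ephemeral disk rather than in your home directory.
# (The subdirectory name and the --checkpoint-dir flag are illustrative, not part of the cluster setup.)
mkdir -p /ephemeral/$USER/checkpoints
python train.py --checkpoint-dir /ephemeral/$USER/checkpoints
</syntaxhighlight>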
6047589d3e88af7467e2203e2abd7c212967845f
Neo
0
55
493
452
2024-04-27T21:02:34Z
ThanatosRobo
13
wikitext
text/x-wiki
[[File:1x Neo.jpg|thumb]]
NEO is a bipedal humanoid robot developed by [[1X]]. It is designed to look and move like a human, featuring a head, torso, arms, and legs. NEO can perform a wide range of tasks, excelling in industrial sectors like security, logistics, manufacturing, operating machinery, and handling complex tasks. It is also envisioned to provide valuable home assistance and perform chores like cleaning or organizing.
NEO's soft, tendon-based design is meant to have very low inertia so that it can work in close proximity to humans. It will weigh 30 kilograms, with a 20-kilogram carrying capacity. 1X hopes for NEO to be "an all-purpose android assistant to your daily life."
{{infobox robot
| name = NEO
| organization = [[1X]]
| height = 165 cm
| weight = 30 kg
| video_link = https://www.youtube.com/watch?v=ikg7xGxvFTs
| speed = 4 km/hr
| carry_capacity = 20 kg
| runtime = 2-4 hrs
}}
[[Category:Robots]]
== References ==
* Ackerman, Evan (2024). "Humanoid Robots are Getting to Work.", ''IEEE Spectrum''.
8493ae5291a723660229ccc6cda4a5cdcb49dfdf
494
493
2024-04-27T21:03:06Z
ThanatosRobo
13
wikitext
text/x-wiki
[[File:1x Neo.jpg|thumb]]
{{infobox robot
| name = NEO
| organization = [[1X]]
| height = 165 cm
| weight = 30 kg
| video_link = https://www.youtube.com/watch?v=ikg7xGxvFTs
| speed = 4 km/hr
| carry_capacity = 20 kg
| runtime = 2-4 hrs
}}
NEO is a bipedal humanoid robot developed by [[1X]]. It is designed to look and move like a human, featuring a head, torso, arms, and legs. NEO can perform a wide range of tasks, excelling in industrial sectors like security, logistics, manufacturing, operating machinery, and handling complex tasks. It is also envisioned to provide valuable home assistance and perform chores like cleaning or organizing.
NEO's soft, tendon-based design is meant to have very low inertia so that it can work in close proximity to humans. It will weigh 30 kilograms, with a 20-kilogram carrying capacity. 1X hopes for NEO to be "an all-purpose android assistant to your daily life."
[[Category:Robots]]
== References ==
* Ackerman, Evan (2024). "Humanoid Robots are Getting to Work.", ''IEEE Spectrum''.
74952b022ae0142020baeb20bcce032db991611f
File:Agility Robotics Digit.jpg
6
146
495
2024-04-27T21:06:59Z
ThanatosRobo
13
wikitext
text/x-wiki
A profile view of Digit, from Agility Robotics.
73258e44e0abebd8e828ecf575ab6ad127a898b3
498
495
2024-04-27T21:08:28Z
ThanatosRobo
13
wikitext
text/x-wiki
A frontal view of Digit, from Agility Robotics.
01a5eafca0e50652afc74e15364669155a8409a1
Tesla
0
7
496
171
2024-04-27T21:07:33Z
Ben
2
wikitext
text/x-wiki
Tesla is building a humanoid robot called [[Optimus]].
{{infobox company
| name = Tesla
| country = United States
| website_link = https://www.tesla.com/
| robots = [[Optimus]]
}}
[[Category:Companies]]
[[Category: Stompy, Expand!]]
4fcc635104d2164cc72670274ac6f743dcae6159
504
496
2024-04-27T22:23:22Z
Stompy
14
Bot expanded article
wikitext
text/x-wiki
Tesla, an innovative company known for its electric vehicles and sustainable energy solutions, has embarked on the development of a humanoid robot named Optimus.
Optimus is envisioned to be a groundbreaking humanoid robot designed by Tesla to perform a wide range of tasks autonomously. Tesla's foray into humanoid robotics with Optimus showcases the company's commitment to advancing technology beyond the realm of electric vehicles.
This project signifies Tesla's exploration of robotics and artificial intelligence beyond its traditional automotive focus. By venturing into humanoid robot development, Tesla aims to revolutionize industries and everyday tasks with cutting-edge automation.
For more information about Tesla and its humanoid robot Optimus, interested individuals can visit Tesla's official website.
{{infobox company
| name = Tesla
| country = United States
| website_link = https://www.tesla.com/
| robots = [[Optimus]]
}}
[[Category:Companies]]
[[Category: Robotics]]
c7e117907014fcd2b2d2a76f3d6b4b41d0ae9cf6
505
504
2024-04-27T22:23:44Z
Stompy
14
wikitext
text/x-wiki
Tesla, an innovative company known for its electric vehicles and sustainable energy solutions, has embarked on the development of a humanoid robot named Optimus.
Optimus is envisioned to be a groundbreaking humanoid robot designed by Tesla to perform a wide range of tasks autonomously. Tesla's foray into humanoid robotics with Optimus showcases the company's commitment to advancing technology beyond the realm of electric vehicles.
This project signifies Tesla's exploration of robotics and artificial intelligence beyond its traditional automotive focus. By venturing into humanoid robot development, Tesla aims to revolutionize industries and everyday tasks with cutting-edge automation.
For more information about Tesla and its humanoid robot Optimus, interested individuals can visit Tesla's official website.
{{infobox company
| name = Tesla
| country = United States
| website_link = https://www.tesla.com/
| robots = [[Optimus]]
}}
[[Category: Companies]]
[[Category: Robotics]]
53781cc2b9abffb320b2a5b310b01e5a469bff98
Digit
0
128
497
450
2024-04-27T21:07:34Z
ThanatosRobo
13
wikitext
text/x-wiki
[[File:Agility Robotics Digit.jpg|thumb]]
{{infobox robot
| name = Digit
| organization = [[Agility]]
| height = 175.3 cm
| weight = 65 kg
| two_hand_payload = 15.88
| runtime =
| walk_speed =
| video_link = https://www.youtube.com/watch?v=NgYo-Wd0E_U
| cost =
}}
Digit is a humanoid robot developed by [[Agility]], designed to navigate human environments and perform tasks such as obstacle avoidance and manipulation. It is equipped with a torso full of sensors and a pair of arms, and is considered the most advanced Mobile Manipulation Robot (MMR) on the market, capable of performing repetitive tasks in production environments without requiring significant infrastructure changes.
Digit is notably bipedal but not strictly anthropomorphic, with ostrich-like reverse-jointed legs. This follows from Agility's design goal of maximizing the efficiency and robustness of legged locomotion.
[[Category:Robots]]
== References ==
* Ackerman, Evan (2024). "Humanoid Robots are Getting to Work.", ''IEEE Spectrum''.
d5d6ac097677e4ef87cd3528e04571054b0beaa1
508
497
2024-04-27T22:25:18Z
Stompy
14
wikitext
text/x-wiki
[[File:Agility Robotics Digit.jpg|thumb]]
{{infobox robot
| name = Digit
| organization = [[Agility]]
| height = 175.3 cm
| weight = 65 kg
| two_hand_payload = 15.88
| runtime =
| walk_speed =
| video_link = https://www.youtube.com/watch?v=NgYo-Wd0E_U
| cost =
}}
Digit is a humanoid robot developed by [[Agility]], designed to navigate human environments and perform tasks such as obstacle avoidance and manipulation. It is equipped with a torso full of sensors and a pair of arms, and is considered the most advanced Mobile Manipulation Robot (MMR) on the market, capable of performing repetitive tasks in production environments without requiring significant infrastructure changes.
Digit is notably bipedal but not strictly anthropomorphic, with ostrich-like reverse-jointed legs. This follows from Agility's design goal of maximizing the efficiency and robustness of legged locomotion.
== References ==
* Ackerman, Evan (2024). "Humanoid Robots are Getting to Work.", ''IEEE Spectrum''.
[[Category:Robots]]
[[Category:Stompy, Expand!]]
656205978bebdca6f36a77bbe24a0e1cb8b6721e
512
508
2024-04-27T22:26:03Z
Stompy
14
Bot expanded article
wikitext
text/x-wiki
{{Infobox robot
| name = Digit
| organization = [[Agility Robotics]]
| height = 175.3 cm
| weight = 65 kg
| speed =
| lift_force =
| battery_life =
| battery_capacity =
| purchase_link =
| number_made =
| dof =
| status =
}}
Digit is a bipedal humanoid robot created by Agility Robotics. It stands at 175.3 cm tall and weighs 65 kg. Digit is designed to operate in real-world environments, undertaking tasks such as navigation, obstacle avoidance, and manipulation. The robot is equipped with a variety of sensors along with a pair of arms. It is considered one of the most advanced Mobile Manipulation Robots (MMRs) available, capable of executing repetitive tasks in production settings with minimal need for alterations to existing infrastructure.
Unlike traditional humanoid robots, Digit features ostrich-like reverse-jointed legs, reflecting Agility Robotics' focus on optimizing efficiency and robustness in legged locomotion.
== References ==
* Ackerman, Evan (2024). "Humanoid Robots are Getting to Work." ''IEEE Spectrum''.
[[Category:Robots]]
[[Category:Humanoid Robots]]
[[Category:Agility Robotics]]
71800de3d0d22a4476f84e5e72ad20b71a82d369
User:Stompy
2
147
499
2024-04-27T21:09:35Z
Stompy
14
Created page with "I am a bot created by [[K-Scale Labs]]."
wikitext
text/x-wiki
I am a bot created by [[K-Scale Labs]].
8aff304e49714aef1eb07ac277110d3e055432c6
Category:Stompy, Expand!
14
148
503
2024-04-27T21:13:17Z
Stompy
14
Created page with "Adding a page to this category will tell Stompy to expand on the information in the page."
wikitext
text/x-wiki
Adding a page to this category will tell Stompy to expand on the information in the page.
2d47af696ce972fa5deeb524d6dc6c85d54e4374
Phoenix
0
53
506
337
2024-04-27T22:24:57Z
Stompy
14
wikitext
text/x-wiki
Phoenix is a humanoid robot from [[Sanctuary AI]].
{{infobox robot
| name = Phoenix
| organization = Sanctuary AI
| height = 5 ft 7 in (170 cm)
| weight = 70 kg (155 lbs)
| two_hand_payload = 25
| video_link = https://youtube.com/watch?v=FH3zbUSMAAU
}}
On May 16 2024, Sanctuary released [https://sanctuary.ai/resources/news/sanctuary-ai-unveils-phoenix-a-humanoid-general-purpose-robot-designed-for-work/ Phoenix Gen 7].
[[File:Main-image-phoenix-annoucement.jpg|none|500px|Phoenix Gen 7|thumb]]
[[Category:Robots]]
[[Category:Stompy, Expand!]]
189bacf739840735b0f35352a86fdcb767a7e98a
Sanctuary
0
9
507
215
2024-04-27T22:25:05Z
Stompy
14
wikitext
text/x-wiki
Sanctuary AI is a humanoid robot company. Their robot is called [[Phoenix]].
{{infobox company
| name = Sanctuary
| country = United States
| website_link = https://sanctuary.ai/
| robots = [[Phoenix]]
}}
[[Category:Companies]]
[[Category:Stompy, Expand!]]
3863045c10f1142e80d0fc3fa0f991a98a24308b
Agility
0
8
509
192
2024-04-27T22:25:35Z
Stompy
14
wikitext
text/x-wiki
Agility has built several robots. Their humanoid robot is called [[Digit]].
{{infobox company
| name = Agility
| country = United States
| website_link = https://agilityrobotics.com/
| robots = [[Cassie]], [[Digit]]
}}
[[Category:Companies]]
[[Category:Stompy, Expand!]]
7dc90d05a7ef4e4d507732655b38676ae20c0474
511
509
2024-04-27T22:25:59Z
Stompy
14
Bot expanded article
wikitext
text/x-wiki
Agility is an American robotics company known for designing and manufacturing advanced legged robots. Its robots include the bipedal robot [[Cassie]] and the humanoid robot [[Digit]].
Digit, one of Agility's notable robots, is specifically designed as a humanoid robot. It is known for its advanced bipedal locomotion capabilities and dexterous manipulation abilities.
Agility's continued innovation in the field of robotics, particularly in the development of humanoid robots like Digit, highlights the company's commitment to pushing the boundaries of robotic technology.
For more information about Agility and their robots, you can visit their official website.
{{infobox company
| name = Agility
| country = United States
| website_link = https://agilityrobotics.com/
| robots = [[Cassie]], [[Digit]]
}}
4a5bb2502770ab35e53589aafa4ed8425520e694
Optimus
0
22
510
236
2024-04-27T22:25:41Z
Stompy
14
wikitext
text/x-wiki
[[File:Optimus Tesla (1).jpg|right|200px|thumb]]
Optimus is a humanoid robot from [[Tesla]].
{{infobox robot
| name = Optimus
| organization = [[Tesla]]
| height = 5 ft 8 in (173 cm)
| weight = 58 kg
| video_link = https://www.youtube.com/watch?v=cpraXaw7dyc
| cost = Unknown, rumored $20k
}}
Tesla began work on the Optimus robot in 2021.
[[Category:Robots]]
[[Category:Stompy, Expand!]]
5b0d207c95016942080d963b856aff2e6ac8c875
Optimus
0
22
513
510
2024-04-27T22:26:07Z
Stompy
14
Bot expanded article
wikitext
text/x-wiki
Optimus is a humanoid robot developed by the renowned technology company Tesla. Standing at a height of 5 feet 8 inches (173 cm) and weighing 58 kg, Optimus is a notable creation in the field of robotics. The robot is shrouded in mystery regarding its exact cost, with speculations hinting at a price tag around $20,000.
Tesla initiated the development of the Optimus robot in the year 2021, showcasing the company's foray into the realm of humanoid robotics. With a focus on innovation and cutting-edge technology, Optimus represents a significant milestone in Tesla's exploration of advanced robotics.
For a visual demonstration of Optimus in action, a video can be viewed on YouTube via the provided link. The video offers a glimpse into the capabilities and design of this futuristic humanoid robot.
Optimus stands as a symbol of Tesla's continuous pursuit of technological advancement and its commitment to pushing the boundaries of innovation. As Tesla delves deeper into the world of robotics, Optimus serves as a striking example of the company's dedication to shaping the future of automation and artificial intelligence.
This article falls within the category of robots, highlighting Optimus as a prominent figure in the ever-evolving landscape of robotic technologies pioneered by companies like Tesla.
c892289fa98534272fc17cdaa88f09f6ca12ffdf
522
513
2024-04-27T22:28:58Z
Stompy
14
Undo revision 513 by [[Special:Contributions/Stompy|Stompy]] ([[User talk:Stompy|talk]])
wikitext
text/x-wiki
[[File:Optimus Tesla (1).jpg|right|200px|thumb]]
Optimus is a humanoid robot from [[Tesla]].
{{infobox robot
| name = Optimus
| organization = [[Tesla]]
| height = 5 ft 8 in (173 cm)
| weight = 58 kg
| video_link = https://www.youtube.com/watch?v=cpraXaw7dyc
| cost = Unknown, rumored $20k
}}
Tesla began work on the Optimus robot in 2021.
[[Category:Robots]]
[[Category:Stompy, Expand!]]
5b0d207c95016942080d963b856aff2e6ac8c875
527
522
2024-04-27T22:38:24Z
Stompy
14
Bot expanded article
wikitext
text/x-wiki
[[File:Optimus Tesla (1).jpg|right|200px|thumb]]
Optimus is a humanoid robot developed by [[Tesla]], an American electric vehicle and clean energy company. Also known as Tesla Bot, Optimus is a key component of Tesla's expansion into automation and artificial intelligence technologies.
{{infobox robot
| name = Optimus
| organization = [[Tesla]]
| height = 5 ft 8 in (173 cm)
| weight = 58 kg
| video_link = https://www.youtube.com/watch?v=cpraXaw7dyc
| cost = Unknown, rumored $20k
}}
== Development ==
Tesla began developing the Optimus robot in 2021, with the goal of creating a multipurpose utility robot capable of performing unsafe, repetitive, or boring tasks, primarily in factory settings. Tesla's CEO, Elon Musk, has suggested that Optimus could eventually take on tasks in domestic environments as well.
== Design ==
The robot stands 5 feet 8 inches tall and weighs approximately 58 kilograms. Its design focuses on replacing human labor in hazardous environments, incorporating advanced sensors and algorithms to navigate complex workspaces safely.
== Features ==
The features of Optimus are built around its capability to handle tools, carry out tasks requiring fine motor skills, and interact safely with human environments. The robot is equipped with Tesla's proprietary Full Self-Driving (FSD) computer, allowing it to understand and navigate real-world scenarios effectively.
== Impact ==
Optimus has significant potential implications for labor markets, particularly in industries reliant on manual labor. Its development also sparks discussions on ethics and the future role of robotics in society.
== References ==
* [https://www.tesla.com Tesla official website]
* [https://www.youtube.com/watch?v=cpraXaw7dyc Presentation of Optimus by Tesla]
[[Category:Robots]]
[[Category:Autonomous Robots]]
[[Category:Tesla Products]]
58b5c6493c46e7c04252dc43c17580bfcbbe67fc
Phoenix
0
53
514
506
2024-04-27T22:26:11Z
Stompy
14
Bot expanded article
wikitext
text/x-wiki
Phoenix is a humanoid robot developed by Sanctuary AI, a robotics company known for its cutting-edge advancements in artificial intelligence. Standing at a height of 5 feet 7 inches (170 cm) and weighing 70 kg (155 lbs), Phoenix boasts a two-hand payload capacity of 25 kg. Sanctuary AI unveiled the latest version of Phoenix, known as Phoenix Gen 7, on May 16, 2024, showcasing its capabilities as a general-purpose robot that excels in various tasks.
The unveiling of Phoenix Gen 7 marked a significant milestone for Sanctuary AI, highlighting their commitment to pushing the boundaries of humanoid robotics in the realm of work and productivity. The robot's design and functionality aim to revolutionize industries by offering advanced robotic solutions for diverse environments.
For further insights into Phoenix's features and operations, Sanctuary AI provides a video demonstration on their official YouTube channel, allowing viewers to witness the robot's agility and versatility in action.
Phoenix Gen 7 represents a remarkable advancement in the field of humanoid robotics, embodying Sanctuary AI's dedication to innovation and excellence. With its impressive capabilities and potential applications, Phoenix stands as a testament to the ongoing progress in the development of sophisticated robotic technologies.
[[Category:Robots]]
[[Category:Humanoid Robots]]
[[Category:Artificial Intelligence]]
a998e6b76421d47846e5f6c9a1ccaef5fe714398
515
514
2024-04-27T22:26:35Z
Stompy
14
Bot expanded article
wikitext
text/x-wiki
{{infobox robot
| name = Phoenix Gen 7
| organization = Sanctuary AI
| video_link = [https://www.youtube.com/ Sanctuary AI YouTube Channel]
| cost = N/A
| height = 170 cm
| weight = 70 kg
| speed = N/A
| lift_force = 25 kg payload capacity
| battery_life = N/A
| battery_capacity = N/A
| purchase_link = N/A
| number_made = N/A
| dof = N/A
| status = Active
}}
Phoenix Gen 7 is a highly advanced humanoid robot developed by the robotics firm Sanctuary AI, featuring notable improvements in both physical and cognitive capacities compared to its predecessors. Unveiled on May 16, 2024, this seventh-generation robot stands at a height of 5 feet 7 inches (170 cm) and weighs 70 kg (155 lbs), with a payload capacity of 25 kg in each hand. Designed for general-purpose applications, Phoenix Gen 7 exemplifies the progress in practical robotics by excelling in a variety of tasks, including complex manipulations and interactions in both industrial and social environments.
=== Development and Features ===
The development of Phoenix Gen 7 by Sanctuary AI represents the company's push towards enhancing the versatility and efficiency of humanoid robots. The robot's design integrates advanced artificial intelligence that allows for increased autonomy and decision-making capabilities, tailored for operations across diverse settings.
=== Applications and Impact ===
The advanced capabilities of Phoenix ensure its suitability for numerous applications, ranging from manufacturing and logistics to customer service and caregiving. By performing tasks that typically require human intelligence and dexterity, Phoenix Gen 7 is poised to revolutionize how work is conceived and carried out across various industries.
=== Public Demonstrations ===
Sanctuary AI has facilitated public engagement and interest through video demonstrations available on its YouTube channel. These videos provide a platform for showcasing Phoenix Gen 7’s agility, adaptability, and operational efficiency.
=== Conclusion ===
Phoenix Gen 7 embodies the cutting-edge of humanoid robotics, combining sophisticated mechanical designs with advanced AI. This robot is not merely a technological marvel but also a significant step forward in integrating humanoid robots into everyday tasks and industries, furthering the reach of robotic applications in human environments.
[[Category:Robots]]
[[Category:Humanoid Robots]]
[[Category:Artificial Intelligence]]
cea3c633859023c278fa64dd8aef393f6d511976
523
515
2024-04-27T22:30:24Z
Stompy
14
Undo revision 515 by [[Special:Contributions/Stompy|Stompy]] ([[User talk:Stompy|talk]])
wikitext
text/x-wiki
Phoenix is a humanoid robot developed by Sanctuary AI, a robotics company known for its cutting-edge advancements in artificial intelligence. Standing at a height of 5 feet 7 inches (170 cm) and weighing 70 kg (155 lbs), Phoenix boasts a two-hand payload capacity of 25 kg. Sanctuary AI unveiled the latest version of Phoenix, known as Phoenix Gen 7, on May 16, 2024, showcasing its capabilities as a general-purpose robot that excels in various tasks.
The unveiling of Phoenix Gen 7 marked a significant milestone for Sanctuary AI, highlighting their commitment to pushing the boundaries of humanoid robotics in the realm of work and productivity. The robot's design and functionality aim to revolutionize industries by offering advanced robotic solutions for diverse environments.
For further insights into Phoenix's features and operations, Sanctuary AI provides a video demonstration on their official YouTube channel, allowing viewers to witness the robot's agility and versatility in action.
Phoenix Gen 7 represents a remarkable advancement in the field of humanoid robotics, embodying Sanctuary AI's dedication to innovation and excellence. With its impressive capabilities and potential applications, Phoenix stands as a testament to the ongoing progress in the development of sophisticated robotic technologies.
[[Category:Robots]]
[[Category:Humanoid Robots]]
[[Category:Artificial Intelligence]]
a998e6b76421d47846e5f6c9a1ccaef5fe714398
524
523
2024-04-27T22:30:31Z
Stompy
14
Undo revision 514 by [[Special:Contributions/Stompy|Stompy]] ([[User talk:Stompy|talk]])
wikitext
text/x-wiki
Phoenix is a humanoid robot from [[Sanctuary AI]].
{{infobox robot
| name = Phoenix
| organization = Sanctuary AI
| height = 5 ft 7 in (170 cm)
| weight = 70 kg (155 lbs)
| two_hand_payload = 25
| video_link = https://youtube.com/watch?v=FH3zbUSMAAU
}}
On May 16 2024, Sanctuary released [https://sanctuary.ai/resources/news/sanctuary-ai-unveils-phoenix-a-humanoid-general-purpose-robot-designed-for-work/ Phoenix Gen 7].
[[File:Main-image-phoenix-annoucement.jpg|none|500px|Phoenix Gen 7|thumb]]
[[Category:Robots]]
[[Category:Stompy, Expand!]]
189bacf739840735b0f35352a86fdcb767a7e98a
528
524
2024-04-27T22:39:27Z
Stompy
14
Bot expanded article
wikitext
text/x-wiki
Phoenix is a state-of-the-art humanoid robot developed by Sanctuary AI, a prominent company in the field of robotics. As a part of the ongoing development in robotic technology, the Phoenix robot is designed with specific capabilities and features aimed at performing various general-purpose tasks, which can be adapted to different environments and conditions.
== Overview ==
Phoenix, associated with [[Sanctuary AI]], represents the seventh generation of humanoid robots produced by the organization. Phoenix Gen 7 was officially unveiled on May 16, 2024, marking a significant advancement in the capability of humanoid robots to perform complex tasks in a human-like manner. The robot is especially noted for its adaptability in various work environments, combining artificial intelligence and mechanical precision to accomplish tasks that require detailed attention and dexterity.
{{infobox robot
| name = Phoenix
| organization = Sanctuary AI
| video_link = https://youtube.com/watch?v=FH3zbUSMAAU
| height = 5 ft 7 in (170 cm)
| weight = 70 kg (155 lbs)
| two_hand_payload = 25
}}
[[File:Main-image-phoenix-annoucement.jpg|none|500px|Phoenix Gen 7|thumb]]
== Design and Specifications ==
=== Physical Attributes ===
* '''Height''': 170 cm (5 ft 7 in)
* '''Weight''': 70 kg (155 lbs)
* '''Payload Capacity (Two-hand)''': 25 kg
=== Mobility and Dexterity ===
Phoenix has been engineered to mimic human movements with a high degree of accuracy and fluidity, which allows it to perform tasks that involve fine motor skills and contextual adaptation to new situations.
== Applications ==
The applications of Phoenix range from industrial manufacturing, where precision and repeatability are essential, to roles in the service sector that demand adaptive interaction with human environments. Phoenix’s design makes it suitable for a variety of tasks including, but not limited to, assembly line work, handling of delicate materials, and interaction with customers in retail settings.
== Development and Release ==
The release of Phoenix Gen 7 by Sanctuary AI was accompanied by significant media coverage and public interest. Sanctuary AI highlighted the advances made in the robot's design, emphasizing its enhanced cognitive and physical abilities compared to previous models.
== External Links ==
* [https://sanctuary.ai/resources/news/sanctuary-ai-unveils-phoenix-a-humanoid-general-purpose-robot-designed-for-work/ Sanctuary AI unveils Phoenix, a humanoid general-purpose robot designed for work]
== References ==
<references/>
[[Category:Robots]]
697744019b31c1b7e25ef3e32c8e2897f37dbb7b
529
528
2024-04-27T22:40:11Z
Stompy
14
Undo revision 528 by [[Special:Contributions/Stompy|Stompy]] ([[User talk:Stompy|talk]])
wikitext
text/x-wiki
Phoenix is a humanoid robot from [[Sanctuary AI]].
{{infobox robot
| name = Phoenix
| organization = Sanctuary AI
| height = 5 ft 7 in (170 cm)
| weight = 70 kg (155 lbs)
| two_hand_payload = 25
| video_link = https://youtube.com/watch?v=FH3zbUSMAAU
}}
On May 16 2024, Sanctuary released [https://sanctuary.ai/resources/news/sanctuary-ai-unveils-phoenix-a-humanoid-general-purpose-robot-designed-for-work/ Phoenix Gen 7].
[[File:Main-image-phoenix-annoucement.jpg|none|500px|Phoenix Gen 7|thumb]]
[[Category:Robots]]
[[Category:Stompy, Expand!]]
189bacf739840735b0f35352a86fdcb767a7e98a
530
529
2024-04-27T22:42:01Z
Stompy
14
Bot expanded article
wikitext
text/x-wiki
= Phoenix (Robot) =
{{infobox robot
| name = Phoenix
| organization = Sanctuary AI
| video_link = https://youtube.com/watch?v=FH3zbUSMAAU
| height = 5 ft 7 in (170 cm)
| weight = 70 kg (155 lbs)
| lift_force = 25 kg (two-handed)
| status = Active
}}
Phoenix is a state-of-the-art humanoid robot designed and developed by [[Sanctuary AI]], a leader in robotic technologies. As a versatile general-purpose robot, Phoenix combines advanced robotics engineering and artificial intelligence capabilities to perform a wide range of tasks that are typically handled by humans.
== Development and Capabilities ==
Phoenix was officially unveiled by Sanctuary AI on May 16, 2024, marking a milestone in humanoid robotics designed for practical and real-world applications [1]. The robot is primarily engineered to assist in various sectors including healthcare, manufacturing, and disaster response where human presence might be dangerous or inefficient.
=== Design Characteristics ===
Phoenix stands at a height of 5 feet 7 inches (170 cm) and weighs approximately 70 kg (155 lbs), which closely imitates human body proportions, optimizing it for operations in human-designed environments. The design flexibility and mobility are supported by sophisticated joints and actuators that allow it precise movement and manipulation capabilities.
It can safely lift a cumulative weight of up to 25 kg using both arms, demonstrating significant strength for handling different objects, tools, and tasks. The design and skin materials are chosen to maximize durability while maintaining a somewhat lifelike appearance to ease interaction with humans.
=== Technological Features ===
Phoenix is integrated with state-of-the-art sensors and cameras that provide a wide range of perception capabilities, including depth sensing, object recognition, and advanced environment mapping. Its AI system is programmed with machine learning algorithms that enable it to learn from experiences and improve task efficiency over time.
== Utilization ==
Sanctuary AI promotes the deployment of Phoenix in sectors where automation is not only beneficial but also crucial for safety and efficiency. This includes assisting in hospitals where they can operate continuously without fatigue, in factories for handling repetitive tasks, and in areas afflicted by natural or industrial disasters for search and rescue operations without risking human lives.
== Future Prospects ==
The continued development of Phoenix involves enhancing its cognitive capabilities and autonomous decision-making processes, making it even more versatile and independent. Sanctuary AI also plans to expand its application in everyday scenarios, aiming to make humanoid robots a common sight in various public and private sectors.
== References ==
# [https://sanctuary.ai/resources/news/sanctuary-ai-unveils-phoenix-a-humanoid-general-purpose-robot-designed-for-work/ Sanctuary AI Press Release on Phoenix Gen 7 Announcement]
[[Category:Robots]]
[[Category:Humanoid Robots]]
025cc5660ef4df900524761a577f45120d707af6
531
530
2024-04-27T22:42:39Z
Stompy
14
Undo revision 530 by [[Special:Contributions/Stompy|Stompy]] ([[User talk:Stompy|talk]])
wikitext
text/x-wiki
Phoenix is a humanoid robot from [[Sanctuary AI]].
{{infobox robot
| name = Phoenix
| organization = Sanctuary AI
| height = 5 ft 7 in (170 cm)
| weight = 70 kg (155 lbs)
| two_hand_payload = 25
| video_link = https://youtube.com/watch?v=FH3zbUSMAAU
}}
On May 16 2024, Sanctuary released [https://sanctuary.ai/resources/news/sanctuary-ai-unveils-phoenix-a-humanoid-general-purpose-robot-designed-for-work/ Phoenix Gen 7].
[[File:Main-image-phoenix-annoucement.jpg|none|500px|Phoenix Gen 7|thumb]]
[[Category:Robots]]
[[Category:Stompy, Expand!]]
189bacf739840735b0f35352a86fdcb767a7e98a
532
531
2024-04-27T22:43:46Z
Stompy
14
Bot expanded article
wikitext
text/x-wiki
== Phoenix (Robot) ==
''Phoenix'' is an advanced humanoid robot developed by the technology company Sanctuary AI, specializing in robots that perform a wide range of human tasks in both industrial and service sectors.
{{infobox robot
| name = Phoenix
| organization = Sanctuary AI
| height = 5 ft 7 in (170 cm)
| weight = 70 kg (155 lbs)
| two_hand_payload = 25
| video_link = https://youtube.com/watch?v=FH3zbUSMAAU
}}
=== Overview ===
Phoenix represents a significant innovation in the field of robotics, designed to mirror human form and functionality closely. This robot is capable of performing complex tasks that require dexterity and reasoning, which are usually challenging for machines. Sanctuary AI introduced Phoenix to integrate seamlessly into environments traditionally occupied by humans, such as offices, factories, and homes.
=== Development and Features ===
On May 16, 2024, Sanctuary AI officially unveiled Phoenix Gen 7, marking it as a milestone in humanoid robotics technology. This model includes advanced sensors and actuators to improve its interaction with human environments significantly.
==== Payload and Mobility ====
Phoenix has a notable capability to handle payloads of up to 25 kg with both hands, making it particularly useful in various manual labor scenarios. Its mobility is enhanced by sophisticated artificial intelligence algorithms that guide its movements, ensuring smooth and accurate actions.
=== Uses and Applications ===
Phoenix is designed as a general-purpose robot capable of adapting to a variety of tasks. These can range from simple domestic duties to more complex industrial activities, such as assembly line work, where precision and reliability are crucial. Its human-like appearance and operational capabilities also enable it to serve in roles that require interaction with people, such as customer service or caregiving.
=== Media Coverage and Public Reception ===
Following its release, Phoenix garnered significant attention from both the media and the public. Various tech blogs and news outlets praised Sanctuary AI for the robot's innovative features and potential impact on various industries.
[[File:Main-image-phoenix-annoucement.jpg|none|500px|Phoenix Gen 7|thumb]]
{{Note|For further details, video demonstrations of Phoenix in action can be viewed on the Sanctuary AI official YouTube channel linked in the infobox above.}}
== See Also ==
* [[Sanctuary AI]]
* [[Humanoid Robots]]
== References ==
<references />
* [https://sanctuary.ai/resources/news/sanctuary-ai-unveils-phoenix-a-humanoid-general-purpose-robot-designed-for-work/ Official announcement of Phoenix Gen 7 by Sanctuary AI]
[[Category:Robots]]
[[Category:Humanoid Robots]]
7cb9a9ab4dfca2099f3486b1896856f4cf0ff61c
533
532
2024-04-27T22:45:13Z
Stompy
14
Undo revision 532 by [[Special:Contributions/Stompy|Stompy]] ([[User talk:Stompy|talk]])
wikitext
text/x-wiki
Phoenix is a humanoid robot from [[Sanctuary AI]].
{{infobox robot
| name = Phoenix
| organization = Sanctuary AI
| height = 5 ft 7 in (170 cm)
| weight = 70 kg (155 lbs)
| two_hand_payload = 25
| video_link = https://youtube.com/watch?v=FH3zbUSMAAU
}}
On May 16 2024, Sanctuary released [https://sanctuary.ai/resources/news/sanctuary-ai-unveils-phoenix-a-humanoid-general-purpose-robot-designed-for-work/ Phoenix Gen 7].
[[File:Main-image-phoenix-annoucement.jpg|none|500px|Phoenix Gen 7|thumb]]
[[Category:Robots]]
[[Category:Stompy, Expand!]]
189bacf739840735b0f35352a86fdcb767a7e98a
534
533
2024-04-27T22:45:29Z
Stompy
14
Bot expanded article
wikitext
text/x-wiki
== Phoenix (Robot) ==
Phoenix is a sophisticated humanoid robot developed by [[Sanctuary AI]], a prominent company known for its advancements in robotics and artificial intelligence.
{{infobox robot
| name = Phoenix
| organization = Sanctuary AI
| video_link = https://youtube.com/watch?v=FH3zbUSMAAU
| height = 5 ft 7 in (170 cm)
| weight = 70 kg (155 lbs)
| two_hand_payload = 25 kg
}}
=== Development and Capabilities ===
Phoenix, introduced on May 16, 2024, represents the seventh generation of humanoid robots aimed at performing general-purpose tasks in various industries, including but not limited to service, healthcare, and logistics. This robot was designed to mimic human dexterity and mobility, allowing it to operate in environments built for human functionality.
=== Major Features ===
* '''Height and Weight''': Phoenix stands at a height of 170 cm and weighs approximately 70 kilograms, which is within the range of an average adult human. This anthropomorphic design facilitates easier integration into human-centric environments.
* '''Two-Hand Payload''': The robot has a two-hand payload capacity of 25 kilograms, making it capable of handling substantial weight, which is essential for tasks involving lifting and carrying objects.
=== Public Release ===
Sanctuary AI publicly unveiled Phoenix on May 16, 2024, through a comprehensive announcement that highlighted the robot's potential applications and its contribution to advancing human-robot collaboration. The detailed introduction and capabilities were featured in a press release and a demonstration video, which showcased Phoenix performing a variety of tasks.<ref>Sanctuary AI News Release, ''Sanctuary AI Unveils Phoenix, a Humanoid General-Purpose Robot Designed for Work'', [https://sanctuary.ai/resources/news/sanctuary-ai-unveils-phoenix-a-humanoid-general-purpose-robot-designed-for-work/]</ref>
[[File:Main-image-phoenix-annoucement.jpg|none|500px|Phoenix Gen 7|thumb]]
== References ==
<references />
[[Category:Robots]]
98256f9a11b494cfc7ff2a9518e70c999792704f
535
534
2024-04-27T22:45:52Z
Stompy
14
wikitext
text/x-wiki
Phoenix is a sophisticated humanoid robot developed by [[Sanctuary AI]], a prominent company known for its advancements in robotics and artificial intelligence.
{{infobox robot
| name = Phoenix
| organization = Sanctuary AI
| video_link = https://youtube.com/watch?v=FH3zbUSMAAU
| height = 5 ft 7 in (170 cm)
| weight = 70 kg (155 lbs)
| two_hand_payload = 25 kg
}}
=== Development and Capabilities ===
Phoenix, introduced on May 16, 2024, represents the seventh generation of humanoid robots aimed at performing general-purpose tasks in various industries, including but not limited to service, healthcare, and logistics. This robot was designed to mimic human dexterity and mobility, allowing it to operate in environments built for human functionality.
=== Major Features ===
* '''Height and Weight''': Phoenix stands at a height of 170 cm and weighs approximately 70 kilograms, which is within the range of an average adult human. This anthropomorphic design facilitates easier integration into human-centric environments.
* '''Two-Hand Payload''': The robot has a two-hand payload capacity of 25 kilograms, making it capable of handling substantial weight, which is essential for tasks involving lifting and carrying objects.
=== Public Release ===
Sanctuary AI publicly unveiled Phoenix on May 16, 2024, through a comprehensive announcement that highlighted the robot's potential applications and its contribution to advancing human-robot collaboration. The detailed introduction and capabilities were featured in a press release and a demonstration video, which showcased Phoenix performing a variety of tasks.<ref>Sanctuary AI News Release, ''Sanctuary AI Unveils Phoenix, a Humanoid General-Purpose Robot Designed for Work'', [https://sanctuary.ai/resources/news/sanctuary-ai-unveils-phoenix-a-humanoid-general-purpose-robot-designed-for-work/]</ref>
[[File:Main-image-phoenix-annoucement.jpg|none|500px|Phoenix Gen 7|thumb]]
== References ==
<references />
[[Category:Robots]]
027a4e90325ad695dd8e14aa52823d80f48273da
Sanctuary
0
9
516
507
2024-04-27T22:26:54Z
Stompy
14
Bot expanded article
wikitext
text/x-wiki
Sanctuary AI is a pioneering company in the field of humanoid robotics, based in the United States. Founded with the vision to create sophisticated humanoid robots capable of performing complex tasks, Sanctuary AI focuses on developing robots that can safely interact and collaborate with humans in various environments. The company's flagship robot, Phoenix, represents a significant advancement in humanoid robot technology.
{{infobox company
| name = Sanctuary AI
| country = United States
| website_link = https://sanctuary.ai/
| robots = [[Phoenix]]
}}
### Overview
Sanctuary AI was established with the aim of integrating humanoid robots into everyday settings, improving efficiency and safety, and enabling new capabilities that were previously unattainable in several industries, including healthcare, service sectors, and hazardous environment operations. The company leverages cutting-edge advancements in artificial intelligence, robotics, and human-robot interaction to build robots that are not only mechanically efficient but also capable of intelligent decision-making and adaptation to various situations.
### Phoenix
Phoenix, Sanctuary AI's premier humanoid robot, is designed to seamlessly integrate into human workspaces. It has been developed with advanced sensor technology and AI-driven functionalities, enabling it to perform tasks ranging from simple material handling to complex problem-solving scenarios that require interaction with human counterparts.
Despite being at the forefront of humanoid robotics technology, Sanctuary AI maintains a strong commitment to ethical considerations in AI, focusing on the safe and responsible deployment of robots in society. This includes ensuring that Phoenix and future models adhere to stringent safety protocols and ethical guidelines to prevent any potential misuse of the technology.
### Initiatives and Impact
Sanctuary AI not only focuses on commercial applications but also participates in various academic and collaborative research initiatives to push the boundaries of what's possible in humanoid robotics. These collaborations are aimed at fostering innovation and ensuring the integration of new technologies that enhance the capabilities of humanoid robots while addressing societal challenges.
The company's impact extends beyond the technology sector, influencing discussions and policies around the future of work, ethics in robotics, and the integration of AI systems in our daily lives. Sanctuary AI is also an active participant in regulatory and standard-setting activities intended to shape the future landscape of robotics and AI technologies.
### Future Outlook
Looking ahead, Sanctuary AI is dedicated to refining their technologies and scaling their applications, not just within high-skill environments but also in everyday use cases. This commitment is anticipated to drive further innovations in robot autonomy, human-robot collaboration, and ethical AI practices, marking Sanctuary AI as a significant player in shaping the future integration of robots into human societies.
[[Category:Companies]]
89fae2e568859ca32b29b03ad668005ee70f8594
519
516
2024-04-27T22:27:48Z
Stompy
14
wikitext
text/x-wiki
Sanctuary AI is a pioneering company in the field of humanoid robotics, based in the United States. Founded with the vision to create sophisticated humanoid robots capable of performing complex tasks, Sanctuary AI focuses on developing robots that can safely interact and collaborate with humans in various environments. The company's flagship robot, Phoenix, represents a significant advancement in humanoid robot technology.
{{infobox company
| name = Sanctuary AI
| country = United States
| website_link = https://sanctuary.ai/
| robots = [[Phoenix]]
}}
=== Overview ===
Sanctuary AI was established with the aim of integrating humanoid robots into everyday settings, improving efficiency and safety, and enabling new capabilities that were previously unattainable in several industries, including healthcare, service sectors, and hazardous environment operations. The company leverages cutting-edge advancements in artificial intelligence, robotics, and human-robot interaction to build robots that are not only mechanically efficient but also capable of intelligent decision-making and adaptation to various situations.
=== Phoenix ===
Phoenix, Sanctuary AI's premier humanoid robot, is designed to seamlessly integrate into human workspaces. It has been developed with advanced sensor technology and AI-driven functionalities, enabling it to perform tasks ranging from simple material handling to complex problem-solving scenarios that require interaction with human counterparts.
Despite being at the forefront of humanoid robotics technology, Sanctuary AI maintains a strong commitment to ethical considerations in AI, focusing on the safe and responsible deployment of robots in society. This includes ensuring that Phoenix and future models adhere to stringent safety protocols and ethical guidelines to prevent any potential misuse of the technology.
=== Initiatives and Impact ===
Sanctuary AI not only focuses on commercial applications but also participates in various academic and collaborative research initiatives to push the boundaries of what's possible in humanoid robotics. These collaborations are aimed at fostering innovation and ensuring the integration of new technologies that enhance the capabilities of humanoid robots while addressing societal challenges.
The company's impact extends beyond the technology sector, influencing discussions and policies around the future of work, ethics in robotics, and the integration of AI systems in our daily lives. Sanctuary AI is also an active participant in regulatory and standard-setting activities intended to shape the future landscape of robotics and AI technologies.
=== Future Outlook ===
Looking ahead, Sanctuary AI is dedicated to refining their technologies and scaling their applications, not just within high-skill environments but also in everyday use cases. This commitment is anticipated to drive further innovations in robot autonomy, human-robot collaboration, and ethical AI practices, marking Sanctuary AI as a significant player in shaping the future integration of robots into human societies.
[[Category:Companies]]
72f1c267e3975528f241eed325995d81ecb23e51
520
519
2024-04-27T22:28:03Z
Stompy
14
wikitext
text/x-wiki
Sanctuary AI is a pioneering company in the field of humanoid robotics, based in Vancouver, Canada. Founded with the vision of creating sophisticated humanoid robots capable of performing complex tasks, Sanctuary AI focuses on developing robots that can safely interact and collaborate with humans in various environments. The company's flagship robot, Phoenix, represents a significant advancement in humanoid robot technology.
{{infobox company
| name = Sanctuary AI
| country = Canada
| website_link = https://sanctuary.ai/
| robots = [[Phoenix]]
}}
=== Overview ===
Sanctuary AI was established with the aim of integrating humanoid robots into everyday settings, improving efficiency and safety, and enabling new capabilities that were previously unattainable in several industries, including healthcare, service sectors, and hazardous environment operations. The company leverages cutting-edge advancements in artificial intelligence, robotics, and human-robot interaction to build robots that are not only mechanically efficient but also capable of intelligent decision-making and adaptation to various situations.
=== Phoenix ===
[[Phoenix]], Sanctuary AI's premier humanoid robot, is designed to seamlessly integrate into human workspaces. It has been developed with advanced sensor technology and AI-driven functionalities, enabling it to perform tasks ranging from simple material handling to complex problem-solving scenarios that require interaction with human counterparts.
Despite being at the forefront of humanoid robotics technology, Sanctuary AI maintains a strong commitment to ethical considerations in AI, focusing on the safe and responsible deployment of robots in society. This includes ensuring that Phoenix and future models adhere to stringent safety protocols and ethical guidelines to prevent any potential misuse of the technology.
=== Initiatives and Impact ===
Sanctuary AI not only focuses on commercial applications but also participates in various academic and collaborative research initiatives to push the boundaries of what's possible in humanoid robotics. These collaborations are aimed at fostering innovation and ensuring the integration of new technologies that enhance the capabilities of humanoid robots while addressing societal challenges.
The company's impact extends beyond the technology sector, influencing discussions and policies around the future of work, ethics in robotics, and the integration of AI systems in our daily lives. Sanctuary AI is also an active participant in regulatory and standard-setting activities intended to shape the future landscape of robotics and AI technologies.
=== Future Outlook ===
Looking ahead, Sanctuary AI is dedicated to refining their technologies and scaling their applications, not just within high-skill environments but also in everyday use cases. This commitment is anticipated to drive further innovations in robot autonomy, human-robot collaboration, and ethical AI practices, marking Sanctuary AI as a significant player in shaping the future integration of robots into human societies.
[[Category:Companies]]
5952889bb695ce441e88e144c6bc52ad6a1e22b0
Digit
0
128
517
512
2024-04-27T22:27:04Z
Stompy
14
Undo revision 512 by [[Special:Contributions/Stompy|Stompy]] ([[User talk:Stompy|talk]])
wikitext
text/x-wiki
[[File:Agility Robotics Digit.jpg|thumb]]
{{infobox robot
| name = Digit
| organization = [[Agility]]
| height = 175.3 cm
| weight = 65 kg
| two_hand_payload = 15.88 kg
| runtime =
| walk_speed =
| video_link = https://www.youtube.com/watch?v=NgYo-Wd0E_U
| cost =
}}
Digit is a humanoid robot developed by [[Agility]], designed to navigate real-world environments and perform tasks such as obstacle avoidance and mobile manipulation. It is equipped with a sensor-laden torso and a pair of arms, and is marketed as one of the most advanced Mobile Manipulation Robots (MMRs) on the market, capable of performing repetitive tasks in production environments without requiring significant infrastructure changes.
Digit is notably bipedal but not strictly humanoid/anthropomorphic, with ostrich-like reverse-jointed legs. This follows from Agility's design goals of maximizing the efficiency and robustness of legged locomotion.
== References ==
* Ackerman, Evan (2024). "Humanoid Robots are Getting to Work." ''IEEE Spectrum''.
[[Category:Robots]]
[[Category:Stompy, Expand!]]
656205978bebdca6f36a77bbe24a0e1cb8b6721e
518
517
2024-04-27T22:27:15Z
Stompy
14
wikitext
text/x-wiki
[[File:Agility Robotics Digit.jpg|thumb]]
{{infobox robot
| name = Digit
| organization = [[Agility]]
| height = 175.3 cm
| weight = 65 kg
| two_hand_payload = 15.88 kg
| runtime =
| walk_speed =
| video_link = https://www.youtube.com/watch?v=NgYo-Wd0E_U
| cost =
}}
Digit is a humanoid robot developed by [[Agility]], designed to navigate real-world environments and perform tasks such as obstacle avoidance and mobile manipulation. It is equipped with a sensor-laden torso and a pair of arms, and is marketed as one of the most advanced Mobile Manipulation Robots (MMRs) on the market, capable of performing repetitive tasks in production environments without requiring significant infrastructure changes.
Digit is notably bipedal but not strictly humanoid/anthropomorphic, with ostrich-like reverse-jointed legs. This follows from Agility's design goals of maximizing the efficiency and robustness of legged locomotion.
== References ==
* Ackerman, Evan (2024). "Humanoid Robots are Getting to Work." ''IEEE Spectrum''.
[[Category:Robots]]
89f690e378837bc269b293995ad44caf317ecafc
Agility
0
8
521
511
2024-04-27T22:28:46Z
Stompy
14
Undo revision 511 by [[Special:Contributions/Stompy|Stompy]] ([[User talk:Stompy|talk]])
wikitext
text/x-wiki
Agility has built several robots. Their humanoid robot is called [[Digit]].
{{infobox company
| name = Agility
| country = United States
| website_link = https://agilityrobotics.com/
| robots = [[Cassie]], [[Digit]]
}}
[[Category:Companies]]
[[Category:Stompy, Expand!]]
7dc90d05a7ef4e4d507732655b38676ae20c0474
525
521
2024-04-27T22:35:11Z
Stompy
14
Bot expanded article
wikitext
text/x-wiki
Agility Robotics is an American company specializing in the development and production of advanced robotic systems. The company has gained significant attention in the robotics industry for its innovative humanoid and bipedal robots, such as Cassie and Digit.
{{infobox company
| name = Agility Robotics
| country = United States
| website_link = https://agilityrobotics.com/
| robots = [[Cassie]], [[Digit]]
}}
Founded with the mission to revolutionize mobility and delivery solutions via robotics, Agility Robotics has developed technologies aimed at various applications, including package delivery, surveillance, industrial inspection, and mobility assistance.
==Products==
=== Cassie ===
Cassie is a bipedal robot designed for robust ambulation in complex environments. Since its introduction, Cassie has been utilized in various research and development projects focused on improving robotic mobility and stability.
=== Digit ===
Digit, the more advanced of the company's robots, expands on the capabilities introduced by Cassie. Digit includes arms to handle and manipulate objects, making it suitable for tasks in logistic operations and environments that require human-like interaction. Digit's design allows for navigating uneven terrain and climbing stairs, which makes it ideal for both indoor and outdoor tasks.
==Innovation and Impact==
Agility Robotics leverages state-of-the-art technologies in biomechanics and artificial intelligence to create humanoid robots capable of performing tasks that are challenging or dangerous for humans. Their robots are notable for their dynamic balance, which enables them to walk in a human-like manner, adapt to their environment, and recover from disturbances.
==Research and Development==
The company collaborates with academic institutions and commercial partners to push the boundaries of what is possible in robotics. These collaborations ensure continuous improvements in robotic design and functionality, thereby expanding the potential use cases for humanoid robots.
==Future Plans==
Agility Robotics continues to develop new technologies and improve upon their existing robot models. The company aims to enhance robotic interaction with the physical and social world, contributing to more automated and efficient working environments.
[[Category:Companies]]
[[Category:Robotics]]
865c49b2670d98395b15c1dc32b71998eed5bc49
526
525
2024-04-27T22:36:40Z
Stompy
14
wikitext
text/x-wiki
Agility Robotics is an American company specializing in the development and production of advanced robotic systems. The company has gained significant attention in the robotics industry for its innovative humanoid and bipedal robots, such as Cassie and Digit.
{{infobox company
| name = Agility Robotics
| country = United States
| website_link = https://agilityrobotics.com/
| robots = [[Cassie]], [[Digit]]
}}
Founded with the mission to revolutionize mobility and delivery solutions via robotics, Agility Robotics has developed technologies aimed at various applications, including package delivery, surveillance, industrial inspection, and mobility assistance.
== Products ==
=== Cassie ===
[[Cassie]] is a bipedal robot designed for robust ambulation in complex environments. Since its introduction, Cassie has been utilized in various research and development projects focused on improving robotic mobility and stability.
=== Digit ===
[[Digit]], the more advanced of the company's robots, expands on the capabilities introduced by Cassie. Digit includes arms to handle and manipulate objects, making it suitable for tasks in logistic operations and environments that require human-like interaction. Digit's design allows for navigating uneven terrain and climbing stairs, which makes it ideal for both indoor and outdoor tasks.
== Innovation and Impact ==
Agility Robotics leverages state-of-the-art technologies in biomechanics and artificial intelligence to create humanoid robots capable of performing tasks that are challenging or dangerous for humans. Their robots are notable for their dynamic balance, which enables them to walk in a human-like manner, adapt to their environment, and recover from disturbances.
== Research and Development ==
The company collaborates with academic institutions and commercial partners to push the boundaries of what is possible in robotics. These collaborations ensure continuous improvements in robotic design and functionality, thereby expanding the potential use cases for humanoid robots.
== Future Plans ==
Agility Robotics continues to develop new technologies and improve upon their existing robot models. The company aims to enhance robotic interaction with the physical and social world, contributing to more automated and efficient working environments.
[[Category:Companies]]
c1c9efd4d3c6d52245faf68486c215ef9ac431e1
Kepler
0
93
536
396
2024-04-27T22:49:18Z
Stompy
14
wikitext
text/x-wiki
Kepler is building a humanoid robot called [[K1]].
{{infobox company
| name = Kepler
| country = China
| website_link = https://www.gotokepler.com/
| robots = [[K1]]
}}
[[Category:Companies]]
[[Category:Stompy, Expand!]]
260a8462d6e08ccd8a588b4338ffa610bd812483
537
536
2024-04-27T22:51:50Z
Stompy
14
Bot expanded article
wikitext
text/x-wiki
{{infobox company
| name = Kepler
| country = China
| website_link = https://www.gotokepler.com/
| robots = [[K1]]
}}
== Overview ==
'''Kepler''' is a robotics company based in China, known for developing advanced humanoid robots, including the notable ''[[K1]]'' model. Kepler focuses on integrating cutting-edge technology to enhance robotic capabilities for practical applications in industries such as service, healthcare, and entertainment.
== History ==
The company was founded with the vision of creating highly functional humanoid robots that could perform a variety of tasks, ranging from simple domestic chores to complex industrial operations. Over the years, Kepler has invested significantly in research and development, making considerable advancements in robotic mobility, dexterity, and artificial intelligence.
== Products ==
=== [[K1]] ===
The main product currently highlighted in Kepler's lineup is the [[K1]] humanoid robot. This robot incorporates advanced features designed to interact with human environments effectively.
== Research and Development ==
Kepler continually invests in innovation to stay at the forefront of humanoid robotics technology. This includes developing sophisticated control systems, improving human-robot interaction through AI, and enhancing the physical capabilities of robots in stability, agility, and strength.
== See Also ==
* [[Humanoid Robots]]
* [[Robotics in China]]
[[Category:Companies]]
[[Category:Robotics]]
e26948a59f3bc781e02d5a1246cf383931b25b38
538
537
2024-04-27T22:52:18Z
Stompy
14
wikitext
text/x-wiki
{{infobox company
| name = Kepler
| country = China
| website_link = https://www.gotokepler.com/
| robots = [[K1]]
}}
== Overview ==
Kepler is a robotics company based in China, known for developing advanced humanoid robots, including the notable [[K1]] model. Kepler focuses on integrating cutting-edge technology to enhance robotic capabilities for practical applications in industries such as service, healthcare, and entertainment.
== History ==
The company was founded with the vision of creating highly functional humanoid robots that could perform a variety of tasks, ranging from simple domestic chores to complex industrial operations. Over the years, Kepler has invested significantly in research and development, making considerable advancements in robotic mobility, dexterity, and artificial intelligence.
== Products ==
=== [[K1]] ===
The main product currently highlighted in Kepler's lineup is the [[K1]] humanoid robot. This robot incorporates advanced features designed to interact with human environments effectively.
== Research and Development ==
Kepler continually invests in innovation to stay at the forefront of humanoid robotics technology. This includes developing sophisticated control systems, improving human-robot interaction through AI, and enhancing the physical capabilities of robots in stability, agility, and strength.
== See Also ==
* [[Humanoid Robots]]
* [[Robotics in China]]
[[Category:Companies]]
77cbae17990fac19eba20c76271683af675af365
539
538
2024-04-27T22:52:31Z
Stompy
14
wikitext
text/x-wiki
{{infobox company
| name = Kepler
| country = China
| website_link = https://www.gotokepler.com/
| robots = [[K1]]
}}
== Overview ==
Kepler is a robotics company based in China, known for developing advanced humanoid robots, including the notable [[K1]] model. Kepler focuses on integrating cutting-edge technology to enhance robotic capabilities for practical applications in industries such as service, healthcare, and entertainment.
== History ==
The company was founded with the vision of creating highly functional humanoid robots that could perform a variety of tasks, ranging from simple domestic chores to complex industrial operations. Over the years, Kepler has invested significantly in research and development, making considerable advancements in robotic mobility, dexterity, and artificial intelligence.
== Products ==
=== [[K1]] ===
The main product currently highlighted in Kepler's lineup is the [[K1]] humanoid robot. This robot incorporates advanced features designed to interact with human environments effectively.
== Research and Development ==
Kepler continually invests in innovation to stay at the forefront of humanoid robotics technology. This includes developing sophisticated control systems, improving human-robot interaction through AI, and enhancing the physical capabilities of robots in stability, agility, and strength.
== See Also ==
* [[Robotics in China]]
[[Category:Companies]]
c94a9d1cf8d5bf056beedc7d8ddac1fc614e315c
540
539
2024-04-27T22:58:14Z
Stompy
14
wikitext
text/x-wiki
{{infobox company
| name = Kepler
| country = China
| website_link = https://www.gotokepler.com/
| robots = [[K1]]
}}
== Overview ==
Kepler is a robotics company based in China, known for developing advanced humanoid robots, including the notable [[K1]] model. Kepler focuses on integrating cutting-edge technology to enhance robotic capabilities for practical applications in industries such as service, healthcare, and entertainment.
== History ==
The company was founded with the vision of creating highly functional humanoid robots that could perform a variety of tasks, ranging from simple domestic chores to complex industrial operations. Over the years, Kepler has invested significantly in research and development, making considerable advancements in robotic mobility, dexterity, and artificial intelligence.
== Products ==
=== [[K1]] ===
The main product currently highlighted in Kepler's lineup is the [[K1]] humanoid robot. This robot incorporates advanced features designed to interact with human environments effectively.
== Research and Development ==
Kepler continually invests in innovation to stay at the forefront of humanoid robotics technology. This includes developing sophisticated control systems, improving human-robot interaction through AI, and enhancing the physical capabilities of robots in stability, agility, and strength.
== See Also ==
* [[Robotics in China]]
[[Category:Companies]]
[[Category:Stompy, Expand!]]
469e52b4eeb53f3c367f0a8f4b9c16844a655ec5
541
540
2024-04-27T23:11:27Z
Stompy
14
wikitext
text/x-wiki
{{infobox company
| name = Kepler
| country = China
| website_link = https://www.gotokepler.com/
| robots = [[K1]]
}}
Kepler is a robotics company based in China, known for developing advanced humanoid robots, including the notable [[K1]] model. Kepler focuses on integrating cutting-edge technology to enhance robotic capabilities for practical applications in industries such as service, healthcare, and entertainment.
[[Category:Companies]]
[[Category:Stompy, Expand!]]
464cf3dc502931aabe63c997f0ebd1429613f42f
542
541
2024-04-27T23:51:52Z
Stompy
14
Bot expanded article
wikitext
text/x-wiki
Kepler, formally known as Kepler Exploration Robot Co., Ltd., is a robotics company headquartered in Shanghai, China. Specializing in advanced humanoid robots, Kepler has made headlines with its Forerunner humanoid robots, which were exhibited at the Consumer Electronics Show (CES) in Las Vegas in January 2024. The company's distinct focus on versatile, general-purpose humanoid robots marks its entrance into international markets, where it competes with other humanoid platforms such as Tesla's Optimus.
== History and Development ==
Kepler was founded with a specific mission to pioneer in the domains of service, healthcare, and entertainment robotics. Its significant impact began to be recognized shortly after its inception, when its human-like robots — known for their balance and interactive capabilities — were showcased at major international stages like CES, drawing attention from global technology leaders and media.
== Products and Services ==
Kepler's flagship robot, the [[K1]], falls within the company's Forerunner series. These robots are designed for a range of applications, including but not limited to companion services, elderly care, and customer service in various sectors. The Forerunner series distinguishes itself by its adaptability and human-like interactions, aiming to seamlessly integrate into daily human environments.
== Innovation and Technology ==
The technology behind Kepler's robots integrates advanced AI, robotics engineering, and materials science, positioning the company at the forefront of the humanoid robot industry. With its robots capable of complex tasks such as balancing and playing interactive games, Kepler maintains a strong commitment to enhancing robotic capabilities that cater to practical and dynamic human needs.
== Recognition and Impact ==
At CES 2024, Kepler's presence was not only a demonstration of technological advancements but also an affirmation of its growing influence in the robotics sector. Noteworthy visitors such as Microsoft's CEO Satya Nadella and representatives from major tech companies and research institutions like Nvidia, Google DeepMind, and MIT visited the Kepler exhibit, highlighting the strong industry and academic interest in Kepler's innovative approaches.
== Future Prospects ==
Looking forward, Kepler aims to continue pushing the boundaries of humanoid robotics with ongoing research and development efforts. The company is committed to enhancing the interactivity and utility of its robots, promising future generations of humanoid robots that could further revolutionize the way humans interact with machines.
[[Category:Companies]]
[[Category:Robotics]]
== References ==
* [https://ces.vporoom.com/2024-01-22-CES-2024-Spotlight-Keplers-Humanoid-Robot-Launch-Gains-International-Recognition CES 2024 Spotlight on Kepler's Humanoid Robot]
* [https://interestingengineering.com/ces-2024/chinese-humanoid-robot-fight-tesla-optimus Kepler's Humanoid Robot set to compete with Tesla Optimus at CES]
* [https://www.gotokepler.com/about About Kepler Robotics]
46793b278a1cecb485f6441d88a266cd97fd65ca
543
542
2024-04-27T23:52:55Z
Stompy
14
Undo revision 542 by [[Special:Contributions/Stompy|Stompy]] ([[User talk:Stompy|talk]])
wikitext
text/x-wiki
{{infobox company
| name = Kepler
| country = China
| website_link = https://www.gotokepler.com/
| robots = [[K1]]
}}
Kepler is a robotics company based in China, known for developing advanced humanoid robots, including the notable [[K1]] model. Kepler focuses on integrating cutting-edge technology to enhance robotic capabilities for practical applications in industries such as service, healthcare, and entertainment.
[[Category:Companies]]
[[Category:Stompy, Expand!]]
464cf3dc502931aabe63c997f0ebd1429613f42f
544
543
2024-04-27T23:57:43Z
Stompy
14
Bot expanded article
wikitext
text/x-wiki
{{infobox company
| name = Kepler
| country = China
| website_link = https://www.gotokepler.com/
| robots = [[K1]]
}}
Kepler is a robotics company based in China, prominently known for developing advanced humanoid robots, including the celebrated [[K1]] model. Established recently, Kepler has quickly gained recognition in the robotics industry for its cutting-edge technology and ambitious approach to enhancing robotic functionalities for a broad spectrum of applications within service sectors, healthcare, and entertainment realms.
In its brief existence, Kepler made a significant impact at the Consumer Electronics Show (CES) 2024, introducing its humanoid robots to the international scene. This debut was marked by enthusiasm from industry insiders and significant media attention, highlighting Kepler's robust research and development capabilities and its commitment to innovation. The company's presence at CES attracted numerous high-profile visitors, such as executives from Microsoft and Nvidia, showcasing the wide interest in its technological advancements<ref>https://ces.vporoom.com/2024-01-22-CES-2024-Spotlight-Keplers-Humanoid-Robot-Launch-Gains-International-Recognition</ref><ref>https://www.prnewswire.com/news-releases/ces-2024-spotlight-keplers-humanoid-robot-launch-gains-international-recognition-302040175.html</ref>.
== Organization and Development ==
Kepler was founded with a clear focus on spearheading innovations in humanoid robotics. Situated in Shanghai, China, within the Torch Lotus Business Park, the company has positioned itself in an area known for technological advancement and commercial activity<ref>https://www.gotokepler.com/about</ref>.
Kepler's organizational structure and strategic direction are geared towards leveraging artificial intelligence and engineering excellence to create robots that not only perform tasks but also interact and integrate smoothly into human environments.
== Products and Innovations ==
Kepler's product line is notably spearheaded by the [[K1]], a humanoid robot designed for general-purpose applications. This model embodies the company's philosophy of creating versatile, adaptable, and technologically sophisticated machines capable of serving various industry needs from personal care to complex industrial tasks. Details about the K1 and other products are prominently featured on Kepler's official website and through their promotional channels.
[[Category:Companies]]
894570b9eb52d506bb893b65c22844579e55131e
545
544
2024-04-27T23:58:53Z
Stompy
14
wikitext
text/x-wiki
{{infobox company
| name = Kepler
| country = China
| website_link = https://www.gotokepler.com/
| robots = [[K1]]
}}
Kepler is a robotics company based in China, prominently known for developing advanced humanoid robots, including the celebrated [[K1]] model. Established recently, Kepler has quickly gained recognition in the robotics industry for its cutting-edge technology and ambitious approach to enhancing robotic functionalities for a broad spectrum of applications within service sectors, healthcare, and entertainment realms.
In its brief existence, Kepler made a significant impact at the Consumer Electronics Show (CES) 2024, introducing its humanoid robots to the international scene. This debut was marked by enthusiasm from industry insiders and significant media attention, highlighting Kepler's robust research and development capabilities and its commitment to innovation. The company's presence at CES attracted numerous high-profile visitors, such as executives from Microsoft and Nvidia, showcasing the wide interest in its technological advancements<ref>https://ces.vporoom.com/2024-01-22-CES-2024-Spotlight-Keplers-Humanoid-Robot-Launch-Gains-International-Recognition</ref><ref>https://www.prnewswire.com/news-releases/ces-2024-spotlight-keplers-humanoid-robot-launch-gains-international-recognition-302040175.html</ref>.
== Organization and Development ==
Kepler was founded with a clear focus on spearheading innovations in humanoid robotics. Situated in Shanghai, China, within the Torch Lotus Business Park, the company has positioned itself in an area known for technological advancement and commercial activity<ref>https://www.gotokepler.com/about</ref>.
Kepler's organizational structure and strategic direction are geared towards leveraging artificial intelligence and engineering excellence to create robots that not only perform tasks but also interact and integrate smoothly into human environments.
== Products and Innovations ==
Kepler's product line is notably spearheaded by the [[K1]], a humanoid robot designed for general-purpose applications. This model embodies the company's philosophy of creating versatile, adaptable, and technologically sophisticated machines capable of serving various industry needs from personal care to complex industrial tasks. Details about the K1 and other products are prominently featured on Kepler's official website and through their promotional channels.
== References ==
<references />
[[Category:Companies]]
e3cc78ead7ae5b755f94f9a2d1270f04af9ccae6
H1
0
3
546
254
2024-04-27T23:59:59Z
Stompy
14
wikitext
text/x-wiki
The Unitree H1 is available for purchase [https://shop.unitree.com/products/unitree-h1 here].
{{infobox robot
| name = H1
| organization = [[Unitree]]
| video_link = https://www.youtube.com/watch?v=83ShvgtyFAg
| cost = USD 150,000
| purchase_link = https://shop.unitree.com/products/unitree-h1
}}
[[Category:Robots]]
[[Category:Stompy, Expand!]]
96aba9136a66719fa817bb8b4300e2c29d5bdf4b
547
546
2024-04-28T00:00:29Z
Stompy
14
Bot expanded article
wikitext
text/x-wiki
The Unitree H1 is a full-size, high-power performance humanoid robot developed by Unitree Robotics. It is notable for its advanced powertrain which facilitates unprecedented speed, power, maneuverability, and flexibility in its class.
== Overview ==
The H1 possesses a robust structure with a body size of approximately 180 cm in height and weighs around 47 kg. It combines high-speed mobility and versatile motion capabilities, making it one of the most capable robots in its category. The robot is equipped with a highly advanced powertrain system that enables a moving speed of up to 3.3 meters per second.
== Specifications ==
* '''Height:''' Approximately 180 cm
* '''Weight:''' Approximately 47 kg
* '''Moving Speed:''' Up to 3.3 m/s
* '''Peak Torque Density:''' 189 N.m/kg
* '''Battery Capacity:''' 864 Wh (quickly replaceable)
* '''Maximum Joint Torque:''' 360 N.m
* '''Sensory Equipment:''' Includes 360° Depth Sensing, 3D LIDAR, and a Depth Camera
== Applications ==
The H1 is designed as a universal humanoid robot, suitable for various applications including industrial automation, entertainment, research, and public services. Its exceptional torque and motion capabilities allow it to perform complex tasks that require human-like dexterity and robustness.
== Purchase Information ==
The H1 robot is available for purchase at a cost of USD 150,000. Further details and purchase options can be found on the Unitree's official [https://shop.unitree.com/products/unitree-h1 website].
== References ==
<references />
d619f05d57a91580d4a60a22478e2082dce9f825
548
547
2024-04-28T00:00:51Z
Stompy
14
Undo revision 547 by [[Special:Contributions/Stompy|Stompy]] ([[User talk:Stompy|talk]])
wikitext
text/x-wiki
The Unitree H1 is available for purchase [https://shop.unitree.com/products/unitree-h1 here].
{{infobox robot
| name = H1
| organization = [[Unitree]]
| video_link = https://www.youtube.com/watch?v=83ShvgtyFAg
| cost = USD 150,000
| purchase_link = https://shop.unitree.com/products/unitree-h1
}}
[[Category:Robots]]
[[Category:Stompy, Expand!]]
96aba9136a66719fa817bb8b4300e2c29d5bdf4b
549
548
2024-04-28T00:02:21Z
Stompy
14
Bot expanded article
wikitext
text/x-wiki
{{infobox robot
| name = H1
| organization = [[Unitree]]
| video_link = https://www.youtube.com/watch?v=83ShvgtyFAg
| cost = USD 150,000
| height = 180 cm
| weight = 47 kg
| speed = >3.3 m/s
| lift_force =
| battery_life =
| battery_capacity = 864 Wh
| purchase_link = https://shop.unitree.com/products/unitree-h1
| number_made =
| dof =
| status =
}}
'''Unitree H1''' is a full-size universal humanoid robot developed by [[Unitree]], a company known for its innovative robotic designs. The H1 is noted for its high power performance and advanced powertrain technology.
== Specifications ==
The H1 robot stands approximately 180 cm tall and weighs around 47 kg, offering high mobility and physical capabilities. Some of the standout specifications of the H1 include:
* Maximum speed: Exceeds 3.3 meters per second, a benchmark in robot mobility.
* Weight: Approximately 47 kg.
* Maximum joint torque: 360 N.m.
* Battery capacity: 864 Wh, which is quickly replaceable, enhancing the robot's operational endurance.
== Features ==
The H1 incorporates advanced technologies to achieve its high functionality:
* Highly efficient powertrain for superior speed, power, and maneuverability.
* Equipped with high-torque joint motors developed by [[Unitree]] itself.
* 360° depth sensing capabilities combined with LIDAR and depth cameras for robust environmental perception.
== Uses and Applications ==
While detailed use cases of the Unitree H1 are not extensively documented, the robot's build and capabilities suggest it is suited to tasks requiring human-like dexterity and strength, such as industrial applications, complex terrain navigation, and interactive tasks.
== References ==
<references />
c975f7fcb77131d19cb061b6635144da2bd1dd39
550
549
2024-04-28T00:03:47Z
Stompy
14
wikitext
text/x-wiki
'''Unitree H1''' is a full-size universal humanoid robot developed by [[Unitree]], a company known for its innovative robotic designs. The H1 is noted for its high power performance and advanced powertrain technology.
{{infobox robot
| name = H1
| organization = [[Unitree]]
| video_link = https://www.youtube.com/watch?v=83ShvgtyFAg
| cost = USD 150,000
| height = 180 cm
| weight = 47 kg
| speed = >3.3 m/s
| lift_force =
| battery_life =
| battery_capacity = 864 Wh
| purchase_link = https://shop.unitree.com/products/unitree-h1
| number_made =
| dof =
| status =
}}
== Specifications ==
The H1 robot stands approximately 180 cm tall and weighs around 47 kg, offering high mobility and physical capabilities. Some of the standout specifications of the H1 include:
* Maximum speed: Exceeds 3.3 meters per second, a benchmark in robot mobility.
* Weight: Approximately 47 kg.
* Maximum joint torque: 360 N.m.
* Battery capacity: 864 Wh, which is quickly replaceable, enhancing the robot's operational endurance.
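The runtime itself is not listed, but the battery capacity gives a rough way to bound it. As an illustrative estimate only (the 500 W average draw assumed below is not a Unitree specification):
<math>t \approx \frac{E_\text{battery}}{P_\text{avg}} = \frac{864\ \text{Wh}}{500\ \text{W}} \approx 1.7\ \text{h}</math>
Sustained dynamic motion would draw more power and shorten this, while the quick-swap battery design offsets the limit in practice.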
== Features ==
The H1 incorporates advanced technologies to achieve its high functionality:
* Highly efficient powertrain for superior speed, power, and maneuverability.
* Equipped with high-torque joint motors developed by [[Unitree]] itself.
* 360° depth sensing capabilities combined with LIDAR and depth cameras for robust environmental perception.
== Uses and Applications ==
While detailed use cases of the Unitree H1 are not extensively documented, the robot's build and capabilities suggest it is suited to tasks requiring human-like dexterity and strength, such as industrial applications, complex terrain navigation, and interactive tasks.
== References ==
<references />
ade825152dca014944b303fd0031fd13fe6ca023
Unitree
0
6
551
217
2024-04-28T00:04:06Z
Stompy
14
wikitext
text/x-wiki
Unitree is a company based out of China which has built a number of different types of robots.
{{infobox company
| name = Unitree
| country = China
| website_link = https://www.unitree.com/
| robots = [[H1]]
}}
[[Category:Companies]]
[[Category:Stompy, Expand!]]
873bc6b752e1a459154649decec3882362ba82b5
552
551
2024-04-28T00:04:29Z
Stompy
14
Bot expanded article
wikitext
text/x-wiki
Unitree is a company based out of China which has built a number of different types of robots.
{{infobox company
| name = Unitree
| country = China
| website_link = https://www.unitree.com/
| robots = [[H1]], [[Go2]], [[B2]]
}}
Unitree Robotics is a pioneer in quadruped robots, which are known for their adaptability on complex terrain compared to traditional wheeled robots. The company has developed multiple types of robots, such as the H1, Go2, and B2.
== Products ==
=== H1 ===
The Unitree H1 is noted for its advanced AI, efficiency, and high-power performance, with specifications that rank among the highest in its class worldwide. It weighs around 47 kg and has a maximum joint torque of 360 N.m.
=== Go2 ===
Unitree Go2 is an advanced robot equipped with standard 4D Ultra-wide LIDAR and GPT empowerment, which significantly enhances its terrain recognition capabilities. Designed to work in a variety of environments, Go2 pushes the boundaries of technological innovation with its ability to traverse complex terrains effectively.
=== B2 ===
Unitree B2 is built to be a heavy loader with exceptional utility potential in industries such as agriculture. It is designed to perform in demanding conditions, ensuring efficiency in critical operations.
Through these groundbreaking products, Unitree continues to lead in the robotics industry by focusing on innovation and the practical applications of its technology in real-world situations<ref>https://www.unitree.com/</ref><ref>https://shop.unitree.com/products/unitree-go2</ref><ref>https://m.unitree.com/go2/</ref>.
[[Category:Companies]]
[[Category:Robotics Companies]]
== References ==
<references />
9ce7b02974fb0fefbb77e51c071b8d79de775a0c
553
552
2024-04-28T00:04:54Z
Stompy
14
wikitext
text/x-wiki
Unitree is a company based out of China which has built a number of different types of robots.
{{infobox company
| name = Unitree
| country = China
| website_link = https://www.unitree.com/
| robots = [[H1]], [[Go2]], [[B2]]
}}
Unitree Robotics is a pioneer in quadruped robots, which are known for their adaptability on complex terrain compared to traditional wheeled robots. The company has developed multiple types of robots, such as the H1, Go2, and B2.
== Products ==
=== H1 ===
The Unitree H1 is noted for its advanced AI, efficiency, and high-power performance, with specifications that rank among the highest in its class worldwide. It weighs around 47 kg and has a maximum joint torque of 360 N.m.
=== Go2 ===
Unitree Go2 is an advanced robot equipped with standard 4D Ultra-wide LIDAR and GPT empowerment, which significantly enhances its terrain recognition capabilities. Designed to work in a variety of environments, Go2 pushes the boundaries of technological innovation with its ability to traverse complex terrains effectively.
=== B2 ===
Unitree B2 is built to be a heavy loader with exceptional utility potential in industries such as agriculture. It is designed to perform in demanding conditions, ensuring efficiency in critical operations.
Through these groundbreaking products, Unitree continues to lead in the robotics industry by focusing on innovation and the practical applications of its technology in real-world situations<ref>https://www.unitree.com/</ref><ref>https://shop.unitree.com/products/unitree-go2</ref><ref>https://m.unitree.com/go2/</ref>.
== References ==
<references />
[[Category:Companies]]
[[Category:Robotics Companies]]
769d2a7485a8c98de6436744412a906e23c3dc0f
554
553
2024-04-28T00:05:31Z
Stompy
14
wikitext
text/x-wiki
Unitree is a company based out of China which has built a number of different types of robots.
{{infobox company
| name = Unitree
| country = China
| website_link = https://www.unitree.com/
| robots = [[H1]], [[Go2]], [[B2]]
}}
Unitree Robotics is a pioneer in quadruped robots, which are known for their adaptability on complex terrain compared to traditional wheeled robots. The company has developed multiple types of robots, such as the H1, Go2, and B2.
== Products ==
=== H1 ===
The Unitree H1 is noted for its advanced AI, efficiency, and high-power performance, with specifications that rank among the highest in its class worldwide. It weighs around 47 kg and has a maximum joint torque of 360 N.m.
=== Go2 ===
Unitree Go2 is an advanced robot equipped with standard 4D Ultra-wide LIDAR and GPT empowerment, which significantly enhances its terrain recognition capabilities. Designed to work in a variety of environments, Go2 pushes the boundaries of technological innovation with its ability to traverse complex terrains effectively.
=== B2 ===
Unitree B2 is built to be a heavy loader with exceptional utility potential in industries such as agriculture. It is designed to perform in demanding conditions, ensuring efficiency in critical operations.
Through these groundbreaking products, Unitree continues to lead in the robotics industry by focusing on innovation and the practical applications of its technology in real-world situations<ref>https://www.unitree.com/</ref><ref>https://shop.unitree.com/products/unitree-go2</ref><ref>https://m.unitree.com/go2/</ref>.
== References ==
<references />
[[Category:Companies]]
cca8ca1b32fd8c305017090270f228b98f236cb4
GR-1
0
57
555
453
2024-04-28T00:06:06Z
Stompy
14
wikitext
text/x-wiki
GR-1 is a self-developed and mass-produced humanoid robot by [[Fourier Intelligence]]. It has a highly bionic torso and human-like motion control capabilities, with up to 54 Degrees of Freedom (DoFs) across its form. GR-1 can walk briskly, adroitly avoid obstacles, stably descend a slope, and withstand disruptions, making it an ideal physical agent of artificial general intelligence (AGI).
{{infobox robot
| name = GR-1
| organization = [[Fourier Intelligence]]
| height = 165 cm
| weight = 55 kg
| video_link = https://www.youtube.com/watch?v=SHPxcRBlXN0
| cost = USD 149,999
}}
[[Category:Robots]]
[[Category:Stompy, Expand!]]
46127ec7215edbbeaec4c6ce5e0eab82fccfe914
556
555
2024-04-28T00:08:47Z
Stompy
14
Bot expanded article
wikitext
text/x-wiki
The GR-1 is a cutting-edge humanoid robot developed and mass-produced by Fourier Intelligence, a company renowned for its contributions to the field of rehabilitation robots and artificial intelligence. This robot was officially unveiled at the 2023 World Artificial Intelligence Conference in Shanghai, marking a significant milestone as one of China's first commercially available humanoid robots.
== Overview ==
The GR-1 robot stands out with its highly bionic torso and sophisticated human-like motion control capabilities. Designed with an impressive 54 Degrees of Freedom (DoFs) distributed across its form, the GR-1 is engineered to perform a variety of complex movements and tasks. The robot’s physical specifications include a height of 165 cm and a weight of 55 kg, dimensions that contribute to its human-like appearance and capabilities.
== Capabilities ==
GR-1 is programmed to perform with high efficiency in various environments. It can briskly walk, adroitly avoid obstacles, stably descend slopes, and withstand physical disruptions. These capabilities make the GR-1 an ideal physical agent of artificial general intelligence (AGI), addressing needs in sectors like caregiving and physical therapy where adaptability and resilience are crucial.
=== Degrees of Freedom ===
The robot's 54 DoFs allow for a remarkably fluid range of movement, closely mimicking human motion. This high degree of articulation is vital for tasks that require a high level of precision and dexterity.
=== Strength and Endurance ===
GR-1 is equipped for substantial physical exertion, boasting a peak joint torque of 230 N.m. This strength permits the robot not only to support its own weight but also to carry objects nearly equal to its weight, a notable achievement in humanoid robotics.
== Commercial Use and Impact ==
As the world's first mass-produced humanoid robot, the GR-1 by Fourier Intelligence represents a significant advancement in the field of robotics. Its launch is not just a commercial achievement but also a technological leap forward, signaling the potential for humanoid robots to become more integrated into daily tasks and industrial applications.
== References ==
<references />
[[Category:Robots]]
[[Category:Humanoid Robots]]
003cb2694f614f2db1ae453a59588fe51a8a7c1d
557
556
2024-04-28T00:09:04Z
Stompy
14
Undo revision 556 by [[Special:Contributions/Stompy|Stompy]] ([[User talk:Stompy|talk]])
wikitext
text/x-wiki
GR-1 is a self-developed and mass-produced humanoid robot by [[Fourier Intelligence]]. It has a highly bionic torso and human-like motion control capabilities, with up to 54 Degrees of Freedom (DoFs) across its form. GR-1 can walk briskly, adroitly avoid obstacles, stably descend a slope, and withstand disruptions, making it an ideal physical agent of artificial general intelligence (AGI).
{{infobox robot
| name = GR-1
| organization = [[Fourier Intelligence]]
| height = 165 cm
| weight = 55 kg
| video_link = https://www.youtube.com/watch?v=SHPxcRBlXN0
| cost = USD 149,999
}}
[[Category:Robots]]
[[Category:Stompy, Expand!]]
46127ec7215edbbeaec4c6ce5e0eab82fccfe914
558
557
2024-04-28T00:10:21Z
Stompy
14
Bot expanded article
wikitext
text/x-wiki
The GR-1 is a self-developed and mass-produced humanoid robot designed and manufactured by [[Fourier Intelligence]], a company based in Singapore that specializes in rehabilitation robots and artificial intelligence. Known for its anthropomorphic design, the GR-1 boasts a bionic torso and exhibits highly advanced human-like motion control capabilities.
== Overview ==
The GR-1 is an innovative representation in the field of humanoid robots with its design philosophy focused on emulating human movements and behaviors. The robot is equipped with 54 Degrees of Freedom (DoFs) which play a crucial role in its movement and maneuverability capabilities. It is able to perform complex movements such as brisk walking, adroitly avoiding obstacles, stable descent down slopes, and maintaining steadiness upon disruptions. These features combined with its agility and durability make GR-1 an excellent candidate for applications in artificial general intelligence (AGI).
== Specifications ==
The GR-1 stands at 1.65 meters tall and weighs 55 kilograms, with a distributed system of 40 to 54 DoFs enabling its sophisticated movement dynamics. A highlight of its technological prowess is the integration of peak joint torques reaching 230N.m, ensuring strength and precision in its operations. Another significant feature includes the advanced joint module fitted at the hip which allows for enhanced locomotive abilities. The structural and mechanical configurations underline the robot's sophisticated design, blending electronic and mechanical engineering with cutting-edge software technologies.
== Production & Applications ==
Fourier Intelligence plans to manufacture 100 units of the GR-1 by the end of 2023, showcasing its commitment to bringing advanced humanoid robots to the commercial market. The robot's design caters to a variety of applications, particularly emphasizing roles in caregiving and physical therapy. These application domains benefit from GR-1's stable and human-like movement which is essential in environments requiring precise and gentle interactions.
== References ==
<references />
{{DEFAULTSORT:GR-1}}
[[Category:Robots]]
[[Category:Humanoid Robots]]
[[Category:Fourier Intelligence Robotics]]
8ff8963b7958c44c80e40b0aa1541d0d03761be1
559
558
2024-04-28T00:10:45Z
Stompy
14
Undo revision 558 by [[Special:Contributions/Stompy|Stompy]] ([[User talk:Stompy|talk]])
wikitext
text/x-wiki
GR-1 is a self-developed and mass-produced humanoid robot by [[Fourier Intelligence]]. It has a highly bionic torso and human-like motion control capabilities, with up to 54 Degrees of Freedom (DoFs) across its form. GR-1 can walk briskly, adroitly avoid obstacles, stably descend a slope, and withstand disruptions, making it an ideal physical agent of artificial general intelligence (AGI).
{{infobox robot
| name = GR-1
| organization = [[Fourier Intelligence]]
| height = 165 cm
| weight = 55 kg
| video_link = https://www.youtube.com/watch?v=SHPxcRBlXN0
| cost = USD 149,999
}}
[[Category:Robots]]
[[Category:Stompy, Expand!]]
46127ec7215edbbeaec4c6ce5e0eab82fccfe914
560
559
2024-04-28T00:11:40Z
Stompy
14
Bot expanded article
wikitext
text/x-wiki
The GR-1 is a highly sophisticated humanoid robot designed and mass-produced by [[Fourier Intelligence]]. With its highly bionic torso and human-like motion control capabilities, it represents a notable advance in the field of robotics. The GR-1 features up to 54 Degrees of Freedom (DoFs) throughout its form, making it capable of performing complex actions such as brisk walking, adeptly avoiding obstacles, stably descending slopes, and withstanding disruptions<ref>https://fourierintelligence.com/gr1/</ref>.
Built with a peak joint torque of 230 N.m, GR-1 shows considerable strength, enabling it to carry items nearly as heavy as its own weight<ref>https://newatlas.com/robotics/fourier-gr1-humanoid-robot/</ref>. Its innovative design, combined with its advanced capabilities, puts it at the forefront of artificial general intelligence (AGI) physical embodiments.
Fourier Intelligence plans to manufacture 100 of its GR-1 humanoid robots by the end of 2023<ref>https://newatlas.com/robotics/fourier-gr1-humanoid-robot/</ref>. The mass production of this complex robot is a testament to the growing capabilities of robotics manufacturing and an indicator of how humanoid robots are becoming more widely accessible.
== Specifications ==
The GR-1 measures 1.65 m in height and weighs 55 kg<ref>https://www.therobotreport.com/fourier-intelligence-launches-production-version-of-gr-1-humanoid-robot/</ref>. The robot's joint module at the hip is specially designed for enhanced mobility and stability. These measurements, combined with its DoFs and AI capabilities, equip the GR-1 to perform tasks with human-like proficiency.
== Affordability ==
The GR-1 humanoid robot is priced at USD 149,999, making it a significant investment. However, considering its advanced features and capabilities, it provides immense value for researchers, industries, and possibly consumers in the future.
{{infobox robot
| name = GR-1
| organization = [[Fourier Intelligence]]
| cost = USD 149,999
| height = 165 cm
| weight = 55 kg
| speed =
| lift_force =
| battery_life =
| battery_capacity =
| purchase_link =
| number_made = 100 (planned by end of 2023)
| dof = 54
| status =
| video_link = https://www.youtube.com/watch?v=SHPxcRBlXN0
}}
[[Category:Robots]]
== References ==
<references />
694c4d310ff6049fd55cb9d38fda5485b9d7bdfd
561
560
2024-04-28T00:12:41Z
Stompy
14
wikitext
text/x-wiki
{{infobox robot
| name = GR-1
| organization = [[Fourier Intelligence]]
| cost = USD 149,999
| height = 165 cm
| weight = 55 kg
| speed =
| lift_force =
| battery_life =
| battery_capacity =
| purchase_link =
| number_made = 100 (planned by end of 2023)
| dof = 54
| status =
| video_link = https://www.youtube.com/watch?v=SHPxcRBlXN0
}}
The GR-1 is a highly sophisticated humanoid robot designed and mass-produced by [[Fourier Intelligence]]. With its highly bionic torso and human-like motion control capabilities, it represents a notable advance in the field of robotics. The GR-1 features up to 54 Degrees of Freedom (DoFs) throughout its form, making it capable of performing complex actions such as brisk walking, adeptly avoiding obstacles, stably descending slopes, and withstanding disruptions<ref>https://fourierintelligence.com/gr1/</ref>.
Built with a peak joint torque of 230 N.m, GR-1 shows considerable strength, enabling it to carry items nearly as heavy as its own weight<ref>https://newatlas.com/robotics/fourier-gr1-humanoid-robot/</ref>. Its innovative design, combined with its advanced capabilities, puts it at the forefront of artificial general intelligence (AGI) physical embodiments.
Fourier Intelligence plans to manufacture 100 of its GR-1 humanoid robots by the end of 2023<ref>https://newatlas.com/robotics/fourier-gr1-humanoid-robot/</ref>. The mass production of this complex robot is a testament to the growing capabilities of robotics manufacturing and an indicator of how humanoid robots are becoming more widely accessible.
== Specifications ==
The GR-1 measures 1.65 m in height and weighs 55 kg<ref>https://www.therobotreport.com/fourier-intelligence-launches-production-version-of-gr-1-humanoid-robot/</ref>. The robot's joint module at the hip is specially designed for enhanced mobility and stability. These measurements, combined with its DoFs and AI capabilities, equip the GR-1 to perform tasks with human-like proficiency.
== Affordability ==
The GR-1 humanoid robot is priced at USD 149,999, making it a significant investment. However, considering its advanced features and capabilities, it provides immense value for researchers, industries, and possibly consumers in the future.
== References ==
<references />
[[Category:Robots]]
[[Category:Stompy, Expand!]]
64e758b5e6e2904de9cc810fbc4f9d0b76f7ea23
562
561
2024-04-28T00:14:18Z
Stompy
14
Bot expanded article
wikitext
text/x-wiki
{{infobox robot
| name = GR-1
| organization = [[Fourier Intelligence]]
| cost = USD 149,999
| height = 165 cm
| weight = 55 kg
| speed = 5 km/h<ref name="speed">https://newatlas.com/robotics/fourier-gr1-humanoid-robot/</ref>
| lift_force = Able to lift nearly its own weight<ref name="lift">https://newatlas.com/robotics/fourier-gr1-humanoid-robot/</ref>
| battery_life =
| battery_capacity =
| purchase_link =
| number_made = 100 (planned by end of 2023)
| dof = 54
| status = In mass-production<ref name="status">https://finance.yahoo.com/news/chinese-startup-debuts-world-first-130000324.html</ref>
| video_link = https://www.youtube.com/watch?v=SHPxcRBlXN0
}}
The GR-1 is a highly sophisticated humanoid robot designed and mass-produced by [[Fourier Intelligence]]. With its highly bionic torso and human-like motion control capabilities, it represents a notable advance in the field of robotics. The GR-1 features up to 54 Degrees of Freedom (DoFs) throughout its form, allowing it to perform complex actions such as brisk walking at around 5 km/h, adeptly avoiding obstacles, and stably descending slopes<ref name="speed"></ref>. The GR-1 robot has also been designed to withstand disruptions<ref>https://fourierintelligence.com/gr1/</ref>.
Equipped with a peak joint torque of 230 N.m, GR-1 demonstrates considerable strength, enabling it to carry items nearly as heavy as its own weight<ref name="lift"></ref>. Its innovative design, combined with its advanced capabilities, puts it at the forefront of artificial general intelligence (AGI) physical embodiments.
Fourier Intelligence plans to manufacture 100 of its GR-1 humanoid robots by the end of 2023<ref>https://newatlas.com/robotics/fourier-gr1-humanoid-robot/</ref>. The mass-production of this complex robot is a testament to the growing capabilities of robotics manufacturing, indicating the increasing accessibility of humanoid robots.
== Specifications ==
The GR-1 measures 1.65 m in height and weighs 55 kg<ref>https://www.therobotreport.com/fourier-intelligence-launches-production-version-of-gr-1-humanoid-robot/</ref>. Its joint module at the hip is specially designed for enhanced mobility and stability. These measurements, combined with its 54 DoFs and AI capabilities, equip the GR-1 to perform tasks with human-like proficiency.
== Affordability ==
Priced at USD 149,999, the GR-1 humanoid robot is a significant investment. However, considering its advanced features and capabilities, it offers immense value for researchers, industries, and potentially consumers in the future.
== References ==
<references />
[[Category:Robots]]
a53229fd23f4c81b206abc193ed117415faac5b8
GR-1
0
57
563
562
2024-04-28T00:14:48Z
Stompy
14
wikitext
text/x-wiki
{{infobox robot
| name = GR-1
| organization = [[Fourier Intelligence]]
| cost = USD 149,999
| height = 165 cm
| weight = 55 kg
| speed = 5 km/h<ref name="speed">https://newatlas.com/robotics/fourier-gr1-humanoid-robot/</ref>
| lift_force = Able to lift nearly its own weight<ref name="lift">https://newatlas.com/robotics/fourier-gr1-humanoid-robot/</ref>
| battery_life =
| battery_capacity =
| purchase_link =
| number_made = 100 (planned by end of 2023)
| dof = 54
| status = In mass-production<ref name="status">https://finance.yahoo.com/news/chinese-startup-debuts-world-first-130000324.html</ref>
| video_link = https://www.youtube.com/watch?v=SHPxcRBlXN0
}}
The GR-1 is a highly sophisticated humanoid robot designed and mass-produced by [[Fourier Intelligence]]. With its highly bionic torso and human-like motion control capabilities, it initiates a new era in the field of robotics. The GR-1 features up to 54 Degrees of Freedom (DoFs) throughout its form, allowing it to perform complex actions such as brisk walking at around 5 km/h, adeptly avoiding obstacles, and stably descending slopes<ref name="speed"></ref>. The GR-1 has been designed to withstand disruptions<ref>https://fourierintelligence.com/gr1/</ref>.
Equipped with a peak joint torque of 230 N·m, the GR-1 demonstrates considerable strength, enabling it to carry items nearly as heavy as its own weight<ref name="lift"></ref>. Its innovative design, combined with its advanced capabilities, puts it at the forefront of physical embodiments of artificial general intelligence (AGI).
Fourier Intelligence plans to manufacture 100 of its GR-1 humanoid robots by the end of 2023<ref>https://newatlas.com/robotics/fourier-gr1-humanoid-robot/</ref>. The mass-production of this complex robot is a testament to the growing capabilities of robotics manufacturing, indicating the increasing accessibility of humanoid robots.
== Specifications ==
The GR-1 measures 1.65 m in height and weighs 55 kg<ref>https://www.therobotreport.com/fourier-intelligence-launches-production-version-of-gr-1-humanoid-robot/</ref>. Its joint module at the hip is specially designed for enhanced mobility and stability. These measurements, combined with its 54 DoFs and AI capabilities, equip the GR-1 to perform tasks with human-like proficiency.
== Affordability ==
Priced at USD 149,999, the GR-1 humanoid robot is a significant investment. However, considering its advanced features and capabilities, it offers immense value for researchers, industries, and potentially consumers in the future.
== References ==
<references />
[[Category:Robots]]
725ccdbb378afcaedff5b45947674237ef0dd312
SPIN Servo
0
90
564
391
2024-04-28T00:16:58Z
Stompy
14
wikitext
text/x-wiki
The [https://github.com/atopile/spin-servo-drive SPIN Servo] is an open-source actuator developed by [https://atopile.io/ atopile].
[[File:Spin.jpg|thumb]]
{{infobox actuator
| name = SPIN Servo
| manufacturer = Holry Motor
| cost = USD 30 (BOM)
| purchase_link = https://shop.atopile.io/
| nominal_torque = 0.125 N·m
| peak_torque = 0.375 N·m
| weight = 311.6 g
| dimensions = 42 mm x 42 mm x 60 mm
| gear_ratio = Direct drive (bolt on options)
| voltage = 12V-24V
| cad_link = https://github.com/atopile/spin-servo-drive/tree/main/mech
| interface = CAN bus, i2c
}}
[[Category:Actuators]]
[[Category:Stompy, Expand!]]
c882ffdbb4ccead331837b812dbb40ca1ce367fd
565
564
2024-04-28T00:17:28Z
Stompy
14
Bot expanded article
wikitext
text/x-wiki
== Overview ==
The SPIN Servo is an open-source hardware project, designed to make the use of fully-fledged Brushless DC (BLDC) servo motors easy and cost-effective<ref>[https://github.com/atopile/spin-servo-drive GitHub - atopile/spin-servo-drive: SPIN - Servos are awesome]</ref>. It is primarily engineered by atopile, which is known for its toolchains to describe electronic circuit boards with code<ref>[https://atopile.io/spin/ SPIN - atopile]</ref>. The intention behind this project is to introduce software development workflows like reuse, validation, and automation into the world of electronics.
The SPIN Servo is manufactured by the Holry Motor company. It weighs 311.6 g and measures 42 mm x 42 mm x 60 mm. The cost of the Bill of Materials (BOM) is USD 30. The nominal torque of the SPIN Servo is 0.125 N·m, with a peak of 0.375 N·m. It communicates over CAN bus and I2C.
It operates at 12 V to 24 V and is direct drive, with bolt-on gearing options available. All designs and schematics related to the SPIN Servo can be found in the official GitHub repository<ref>[https://github.com/atopile/spin-servo-drive/tree/main/mech SPIN Servo CAD - GitHub]</ref>.
Interested individuals can purchase the SPIN Servo from the official atopile shop<ref>[https://shop.atopile.io/ Atopile Shop]</ref>.
{{infobox actuator
| name = SPIN Servo
| manufacturer = Holry Motor
| cost = USD 30 (BOM)
| purchase_link = https://shop.atopile.io/
| nominal_torque = 0.125 N·m
| peak_torque = 0.375 N·m
| weight = 311.6 g
| dimensions = 42 mm x 42 mm x 60 mm
| gear_ratio = Direct drive (bolt on options)
| voltage = 12V-24V
| cad_link = https://github.com/atopile/spin-servo-drive/tree/main/mech
| interface = CAN bus, i2c
}}
[[Category:Actuators]]
== References ==
<references />
605bbc904b4b9119bef1a57493aed73370b9ed5c
566
565
2024-04-28T00:18:33Z
Stompy
14
wikitext
text/x-wiki
{{infobox actuator
| name = SPIN Servo
| manufacturer = Holry Motor
| cost = USD 30 (BOM)
| purchase_link = https://shop.atopile.io/
| nominal_torque = 0.125 N·m
| peak_torque = 0.375 N·m
| weight = 311.6 g
| dimensions = 42 mm x 42 mm x 60 mm
| gear_ratio = Direct drive (bolt on options)
| voltage = 12V-24V
| cad_link = https://github.com/atopile/spin-servo-drive/tree/main/mech
| interface = CAN bus, i2c
}}
== Overview ==
The SPIN Servo is an open-source hardware project, designed to make the use of fully-fledged Brushless DC (BLDC) servo motors easy and cost-effective<ref>[https://github.com/atopile/spin-servo-drive GitHub - atopile/spin-servo-drive: SPIN - Servos are awesome]</ref>. It is primarily engineered by atopile, which is known for its toolchains to describe electronic circuit boards with code<ref>[https://atopile.io/spin/ SPIN - atopile]</ref>. The intention behind this project is to introduce software development workflows like reuse, validation, and automation into the world of electronics.
The SPIN Servo is manufactured by the Holry Motor company. It weighs 311.6 g and measures 42 mm x 42 mm x 60 mm. The cost of the Bill of Materials (BOM) is USD 30. The nominal torque of the SPIN Servo is 0.125 N·m, with a peak of 0.375 N·m. It communicates over CAN bus and I2C.
It operates at 12 V to 24 V and is direct drive, with bolt-on gearing options available. All designs and schematics related to the SPIN Servo can be found in the official GitHub repository<ref>[https://github.com/atopile/spin-servo-drive/tree/main/mech SPIN Servo CAD - GitHub]</ref>.
Interested individuals can purchase the SPIN Servo from the official atopile shop<ref>[https://shop.atopile.io/ Atopile Shop]</ref>.
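Because the servo exposes a CAN bus interface, it can be commanded from any host with a CAN adapter. The snippet below is a minimal, hypothetical sketch of sending a command frame from Python with the python-can library; the arbitration ID, payload layout, and the <code>send_position_target</code> helper are illustrative placeholders rather than the SPIN protocol, whose actual message format is defined in the firmware in the GitHub repository linked above.
<syntaxhighlight lang=python>
# Hypothetical sketch: command a CAN-connected servo from Python.
# The arbitration ID (0x10) and little-endian float payload are placeholders,
# not the SPIN Servo's real message format.
import struct

import can  # python-can


def send_position_target(bus: can.BusABC, node_id: int, position_rad: float) -> None:
    """Encode a target position as a little-endian float and send one CAN frame."""
    payload = struct.pack("<f", position_rad)
    msg = can.Message(arbitration_id=node_id, data=payload, is_extended_id=False)
    bus.send(msg)


if __name__ == "__main__":
    # Assumes a Linux SocketCAN interface named "can0" (e.g. a USB-CAN adapter).
    with can.interface.Bus(channel="can0", interface="socketcan") as bus:
        send_position_target(bus, node_id=0x10, position_rad=1.57)
</syntaxhighlight>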
== References ==
<references />
[[Category:Actuators]]
a841dc657dbc4beb8f2634a45834612233b7fe2a
Deep Robotics
0
76
567
320
2024-04-28T00:19:53Z
Stompy
14
wikitext
text/x-wiki
Deep Robotics is building a humanoid robot called [[Wukong-IV]].
{{infobox company
| name = Deep Robotics
| country = China
| website_link = https://www.deeprobotics.cn/en
| robots = [[Wukong-IV]]
}}
[[Category:Companies]]
[[Category:Stompy, Expand!]]
d9d1dc40d83ea51b214f3876661a1b080528efe7
568
567
2024-04-28T00:20:25Z
Stompy
14
Bot expanded article
wikitext
text/x-wiki
Deep Robotics is a Chinese company known for its development of humanoid robots, including the [[Wukong-IV]] humanoid robot. As a leader in global quadruped robotics, Deep Robotics aims to serve the education, research, and entertainment markets with its advanced robotic technology<ref>DEEP Robotics showcases new X30 quadruped robot at GITEX GLOBAL 2023. (https://www.deeprobotics.cn/en/)</ref><ref>DEEP Robotics Launches Lite3 Quadruped Robot With Front Flipping ... (https://www.robotics247.com/article/deep_robotics_launches_lite3_quadruped_robot_front_flipping_mobility)</ref>.
The company is not limited to humanoid robots; it is also involved in the design and application of quadrupedal robots, including state-of-the-art models such as the Lite3 and X30. These robots can maneuver through various obstacles, navigate stairs, and function in extreme weather and lighting conditions, with applications in industries such as power, factory, pipeline, and research fields<ref>X30 is a bionic quadruped robot that can traverse obstacles. (https://www.therobotreport.com/deep-robotics-launches-x30-quadruped-robot-to-meet-varied-industry-needs/)</ref>.
== Wukong-IV ==
[[Wukong-IV]], the humanoid robot developed by Deep Robotics, stands at 1.4 meters, weighs around 45 kg, and is actuated by electric motor joints. It features 6 degrees of freedom (DoF) on each leg and 4 DoFs on each arm. As a bionic humanoid robot, Wukong-IV's joint configuration allows it to maintain balance and step in place<ref>Hybrid Bipedal Locomotion Based on Reinforcement Learning and Heuristics. (https://pdfs.semanticscholar.org/f4d3/80f8e0fe39906f21f5270ffd2bf7bae74039.pdf)</ref><ref>Toward Understanding Key Estimation in Learning Robust Humanoid Locomotion. (https://arxiv.org/html/2403.05868v1)</ref>.
{{infobox company
| name = Deep Robotics
| country = China
| website_link = https://www.deeprobotics.cn/en
| robots = [[Wukong-IV]]
}}
== References ==
<references />
[[Category:Companies]]
a80aa36ff4237b276a0c1f9c08ed87b8ca2fbd31
569
568
2024-04-28T00:20:46Z
Stompy
14
wikitext
text/x-wiki
{{infobox company
| name = Deep Robotics
| country = China
| website_link = https://www.deeprobotics.cn/en
| robots = [[Wukong-IV]]
}}
Deep Robotics is a Chinese company known for its development of humanoid robots, including the [[Wukong-IV]] humanoid robot. As a leader in global quadruped robotics, Deep Robotics aims to serve the education, research, and entertainment markets with its advanced robotic technology<ref>DEEP Robotics showcases new X30 quadruped robot at GITEX GLOBAL 2023. (https://www.deeprobotics.cn/en/)</ref><ref>DEEP Robotics Launches Lite3 Quadruped Robot With Front Flipping ... (https://www.robotics247.com/article/deep_robotics_launches_lite3_quadruped_robot_front_flipping_mobility)</ref>.
The company is not limited to humanoid robots; it is also involved in the design and application of quadrupedal robots, including state-of-the-art models such as the Lite3 and X30. These robots can maneuver through various obstacles, navigate stairs, and function in extreme weather and lighting conditions, with applications in industries such as power, factory, pipeline, and research fields<ref>X30 is a bionic quadruped robot that can traverse obstacles. (https://www.therobotreport.com/deep-robotics-launches-x30-quadruped-robot-to-meet-varied-industry-needs/)</ref>.
== Wukong-IV ==
[[Wukong-IV]], the humanoid robot developed by Deep Robotics, stands at 1.4 meters, weighs around 45 kg, and is actuated by electric motor joints. It features 6 degrees of freedom (DoF) on each leg and 4 DoFs on each arm. As a bionic humanoid robot, Wukong-IV's joint configuration allows it to maintain balance and step in place<ref>Hybrid Bipedal Locomotion Based on Reinforcement Learning and Heuristics. (https://pdfs.semanticscholar.org/f4d3/80f8e0fe39906f21f5270ffd2bf7bae74039.pdf)</ref><ref>Toward Understanding Key Estimation in Learning Robust Humanoid Locomotion. (https://arxiv.org/html/2403.05868v1)</ref>.
== References ==
<references />
[[Category:Companies]]
c1d18362e76ccefb4c80f6e44927b57096feeacb
570
569
2024-04-28T00:20:55Z
Stompy
14
wikitext
text/x-wiki
Deep Robotics is a Chinese company known for its development of humanoid robots, including the [[Wukong-IV]] humanoid robot. As a leader in global quadruped robotics, Deep Robotics aims to serve the education, research, and entertainment markets with its advanced robotic technology<ref>DEEP Robotics showcases new X30 quadruped robot at GITEX GLOBAL 2023. (https://www.deeprobotics.cn/en/)</ref><ref>DEEP Robotics Launches Lite3 Quadruped Robot With Front Flipping ... (https://www.robotics247.com/article/deep_robotics_launches_lite3_quadruped_robot_front_flipping_mobility)</ref>.
{{infobox company
| name = Deep Robotics
| country = China
| website_link = https://www.deeprobotics.cn/en
| robots = [[Wukong-IV]]
}}
The company is not limited to humanoid robots; it is also involved in the design and application of quadrupedal robots, including state-of-the-art models such as the Lite3 and X30. These robots can maneuver through various obstacles, navigate stairs, and function in extreme weather and lighting conditions, with applications in industries such as power, factory, pipeline, and research fields<ref>X30 is a bionic quadruped robot that can traverse obstacles. (https://www.therobotreport.com/deep-robotics-launches-x30-quadruped-robot-to-meet-varied-industry-needs/)</ref>.
== Wukong-IV ==
[[Wukong-IV]], the humanoid robot developed by Deep Robotics, stands at 1.4 meters, weighs around 45 kg, and is actuated by electric motor joints. It features 6 degrees of freedom (DoF) on each leg and 4 DoFs on each arm. As a bionic humanoid robot, Wukong-IV's joint configuration allows it to maintain balance and step in place<ref>Hybrid Bipedal Locomotion Based on Reinforcement Learning and Heuristics. (https://pdfs.semanticscholar.org/f4d3/80f8e0fe39906f21f5270ffd2bf7bae74039.pdf)</ref><ref>Toward Understanding Key Estimation in Learning Robust Humanoid Locomotion. (https://arxiv.org/html/2403.05868v1)</ref>.
== References ==
<references />
[[Category:Companies]]
e2db3afc66107440a7c5d465fc0adb6c61b503f7
Wukong-IV
0
75
571
457
2024-04-28T00:21:21Z
Stompy
14
wikitext
text/x-wiki
Wukong-IV is an adult-size humanoid robot designed and built by the research team at [[Deep Robotics]]. It is actuated by electric motor joints. The robot has 6 degrees of freedom (DoF) on each leg and 4 DoFs on each arm.
{{infobox robot
| name = Wukong-IV
| organization = [[Deep Robotics]]
| height = 140 cm
| weight = 45 kg
| single_hand_payload =
| two_hand_payload =
| cost =
| video_link = https://www.youtube.com/watch?v=fbk4fYc6U14
}}
[[Category:Robots]]
[[Category:Stompy, Expand!]]
941fdbffe9abda8eecddd4420419833db77ddc9f
572
571
2024-04-28T00:21:48Z
Stompy
14
Bot expanded article
wikitext
text/x-wiki
Published figures for the cost and payload capabilities of the Wukong-IV are not available. Its height, weight, degrees of freedom, and actuation by 21 electric motor joints are documented below.
{{infobox robot
| name = Wukong-IV
| organization = [[Deep Robotics]]
| height = 140 cm
| weight = 45 kg
| single_hand_payload =
| two_hand_payload =
| cost =
| video_link = https://www.youtube.com/watch?v=fbk4fYc6U14
| dof = 6 (each leg), 4 (each arm)
| number_made =
| lift_force =
| battery_life =
| battery_capacity =
| purchase_link =
| status =
| speed =
}}
The Wukong-IV is an adult-size humanoid robot designed and built by the research team at [[Deep Robotics]]. It stands 1.4 meters tall and weighs 45 kg<ref>https://pdfs.semanticscholar.org/f4d3/80f8e0fe39906f21f5270ffd2bf7bae74039.pdf</ref>. This bionic humanoid robot is actuated by 21 electric motor joints. It has 6 degrees of freedom (DoF) on each leg and 4 DoFs on each arm<ref>https://www.mdpi.com/2072-666X/13/10/1688</ref>.
== References ==
<references />
[[Category:Robots]]
9ef0be8972f48ea64b27cf76b4333e87e07140bf
573
572
2024-04-28T00:22:15Z
Stompy
14
wikitext
text/x-wiki
{{infobox robot
| name = Wukong-IV
| organization = [[Deep Robotics]]
| height = 140 cm
| weight = 45 kg
| single_hand_payload =
| two_hand_payload =
| cost =
| video_link = https://www.youtube.com/watch?v=fbk4fYc6U14
| dof = 6 (each leg), 4 (each arm)
| number_made =
| lift_force =
| battery_life =
| battery_capacity =
| purchase_link =
| status =
| speed =
}}
The Wukong-IV is an adult-size humanoid robot designed and built by the research team at [[Deep Robotics]]. It stands 1.4 meters tall and weighs 45 kg<ref>https://pdfs.semanticscholar.org/f4d3/80f8e0fe39906f21f5270ffd2bf7bae74039.pdf</ref>. This bionic humanoid robot is actuated by 21 electric motor joints. It has 6 degrees of freedom (DoF) on each leg and 4 DoFs on each arm<ref>https://www.mdpi.com/2072-666X/13/10/1688</ref>.
== References ==
<references />
[[Category:Robots]]
51aa24b00f8a0adcae33c3d53c2ac6c51ed59448
574
573
2024-04-28T00:22:30Z
Stompy
14
wikitext
text/x-wiki
{{infobox robot
| name = Wukong-IV
| organization = [[Deep Robotics]]
| height = 140 cm
| weight = 45 kg
| single_hand_payload =
| two_hand_payload =
| cost =
| video_link = https://www.youtube.com/watch?v=fbk4fYc6U14
| dof = 21
| number_made =
| lift_force =
| battery_life =
| battery_capacity =
| purchase_link =
| status =
| speed =
}}
The Wukong-IV is an adult-size humanoid robot designed and built by the research team at [[Deep Robotics]]. It stands 1.4 meters tall and weighs 45 kg<ref>https://pdfs.semanticscholar.org/f4d3/80f8e0fe39906f21f5270ffd2bf7bae74039.pdf</ref>. This bionic humanoid robot is actuated by 21 electric motor joints. It has 6 degrees of freedom (DoF) on each leg and 4 DoFs on each arm<ref>https://www.mdpi.com/2072-666X/13/10/1688</ref>.
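Note that the per-limb figures above account for 2 × 6 (legs) + 2 × 4 (arms) = 20 joints; the cited sources do not state where the remaining joint of the 21 electric motor joints is located, though it is presumably in the torso or head.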
== References ==
<references />
[[Category:Robots]]
39c28ca5d4674bde7be419cffd55f458c2e1fa30
MenteeBot
0
106
575
412
2024-04-28T00:23:51Z
Stompy
14
wikitext
text/x-wiki
MenteeBot is building a humanoid robot called [[MenteeBot (Robot)]].
{{infobox company
| name = MenteeBot
| country = Israel
| website_link = https://www.menteebot.com/
| robots = [[MenteeBot (Robot)]]
}}
[[Category:Companies]]
[[Category:Stompy, Expand!]]
1f00ea3dd9423249b35e36f262714c7d6211e9c8
576
575
2024-04-28T00:24:15Z
Stompy
14
Bot expanded article
wikitext
text/x-wiki
MenteeBot is an Israel-based technology company that focuses on the design and construction of humanoid robots. Its primary creation is a humanoid robot also named [[MenteeBot (Robot)]]. The company aims to advance the development of robotics and human-robot interactions to foster a future where robots can efficiently assist humans in various tasks.
The company was founded with a mission to innovate and push beyond the existing boundaries of robotics, and has since made considerable advances toward the creation of efficient humanoid robots.
{{infobox company
| name = MenteeBot
| country = Israel
| website_link = https://www.menteebot.com/
| robots = [[MenteeBot (Robot)]]
}}
== Overview ==
MenteeBot was built around the concept of creating humanoid robots that can help people in their daily lives. These robots are designed to learn from human behavior and adapt to their needs. The robot MenteeBot is the company's main product and the epitome of its technological expertise and vision.
== Notable Projects ==
The company's most noteworthy project is the [[MenteeBot (Robot)]], a humanoid robot model that has received recognition for its design and functionalities.
== See also ==
* [[Humanoid robots]]
* [[Robotics in Israel]]
[[Category:Companies]]
[[Category:Robot Manufacturers]]
[[Category:Technology Companies in Israel]]
== References ==
<references />
03b7d70156d08cf42ca9e188bb5e6a06e2b3e412
577
576
2024-04-28T00:24:54Z
Stompy
14
Undo revision 576 by [[Special:Contributions/Stompy|Stompy]] ([[User talk:Stompy|talk]])
wikitext
text/x-wiki
MenteeBot is building a humanoid robot called [[MenteeBot (Robot)]].
{{infobox company
| name = MenteeBot
| country = Israel
| website_link = https://www.menteebot.com/
| robots = [[MenteeBot (Robot)]]
}}
[[Category:Companies]]
[[Category:Stompy, Expand!]]
1f00ea3dd9423249b35e36f262714c7d6211e9c8
581
577
2024-04-28T06:44:51Z
108.211.178.220
0
wikitext
text/x-wiki
MenteeBot is building a humanoid robot called [[MenteeBot (Robot)]].
{{infobox company
| name = MenteeBot
| country = Israel
| website_link = https://www.menteebot.com/
| robots = [[MenteeBot (Robot)]]
}}
[[Category:Companies]]
d6961a3aab5a6bf59fcdcd5d0ac420d885dfcc71
User:Allen12
2
149
578
2024-04-28T04:43:37Z
Allen12
15
Created page with "{{infobox person | name = Allen Wu | organization = [[K-Scale Labs]] | title = Employee }} [[Category: K-Scale Employees]]"
wikitext
text/x-wiki
{{infobox person
| name = Allen Wu
| organization = [[K-Scale Labs]]
| title = Employee
}}
[[Category: K-Scale Employees]]
41b3ccf41dcb4839bd1fdcb58fdf2f9c36da0cf6
H1
0
3
579
550
2024-04-28T06:44:15Z
108.211.178.220
0
wikitext
text/x-wiki
'''Unitree H1''' is a full-size universal humanoid robot developed by [[Unitree]], a company known for its innovative robotic designs. The H1 is noted for its superior power performance and advanced powertrain technologies.
{{infobox robot
| name = H1
| organization = [[Unitree]]
| video_link = https://www.youtube.com/watch?v=83ShvgtyFAg
| cost = USD 150,000
| height = 180 cm
| weight = 47 kg
| speed = >3.3 m/s
| lift_force =
| battery_life =
| battery_capacity = 864 Wh
| purchase_link = https://shop.unitree.com/products/unitree-h1
| number_made =
| dof =
| status =
}}
== Specifications ==
The H1 robot stands approximately 180 cm tall and weighs around 47 kg, offering high mobility and physical capabilities. Some of the standout specifications of the H1 include:
* Maximum speed: Exceeds 3.3 meters per second, a benchmark in robot mobility.
* Weight: Approximately 47 kg.
* Maximum joint torque: 360 N·m.
* Battery capacity: 864 Wh, which is quickly replaceable, enhancing the robot's operational endurance.
== Features ==
The H1 incorporates advanced technologies to achieve its high functionality:
* Highly efficient powertrain for superior speed, power, and maneuverability.
* Equipped with high-torque joint motors developed by [[Unitree]] itself.
* 360° depth sensing capabilities combined with LIDAR and depth cameras for robust environmental perception.
== Uses and Applications ==
While detailed use cases of the Unitree H1 are not extensively documented, the robot's build and capabilities suggest it is suited to complex tasks requiring human-like dexterity and strength, such as industrial applications, complex terrain navigation, and interactive tasks.
f3c854cc88bf2e6afc40be54cdba94be07fdbc6f
612
579
2024-04-28T18:46:50Z
185.169.0.227
0
wikitext
text/x-wiki
'''Unitree H1''' is a full-size universal humanoid robot developed by [[Unitree]], a company known for its innovative robotic designs. The H1 is noted for its superior power performance and advanced powertrain technologies.
{{infobox robot
| name = H1
| organization = [[Unitree]]
| video_link = https://www.youtube.com/watch?v=83ShvgtyFAg
| cost = USD 150,000
| height = 180 cm
| weight = 47 kg
| speed = >3.3 m/s
| lift_force =
| battery_life =
| battery_capacity = 864 Wh
| purchase_link = https://shop.unitree.com/products/unitree-h1
| number_made =
| dof =
| status =
}}
== Specifications ==
The H1 robot stands approximately 180 cm tall and weighs around 47 kg, offering high mobility and physical capabilities. Some of the standout specifications of the H1 include:
* Maximum speed: Exceeds 3.3 meters per second, a benchmark in robot mobility.
* Weight: Approximately 47 kg.
* Maximum joint torque: 360 N·m.
* Battery capacity: 864 Wh, which is quickly replaceable, enhancing the robot's operational endurance.
== Features ==
The H1 incorporates advanced technologies to achieve its high functionality:
* Highly efficient powertrain for superior speed, power, and maneuverability.
* Equipped with high-torque joint motors developed by [[Unitree]] itself.
* 360° depth sensing capabilities combined with LIDAR and depth cameras for robust environmental perception.
== Uses and Applications ==
While detailed use cases of the Unitree H1 are not extensively documented, the robot's build and capabilities suggest it is suited to complex tasks requiring human-like dexterity and strength, such as industrial applications, complex terrain navigation, and interactive tasks.
[[Category:Robots]]
b9a87389aef0a24ad6c4967f4066a8a4dc42d1af
CyberOne
0
126
580
439
2024-04-28T06:44:39Z
108.211.178.220
0
wikitext
text/x-wiki
CyberOne is a humanoid robot from [[Xiaomi]].
{{infobox robot
| name = CyberOne
| organization = [[Xiaomi]]
| height = 177 cm
| weight = 52 kg
| single_arm_payload = 1.5
| runtime =
| walk_speed =
| video_link = https://www.youtube.com/watch?v=yBmatGQ0giY
| cost =
}}
[[Category:Robots]]
[[Category:Stompy, Expand!]]
84807e69e223314d088276478494f64b777e220f
591
580
2024-04-28T07:04:44Z
Stompy
14
Bot expanded article
wikitext
text/x-wiki
CyberOne is a humanoid robot developed by the Chinese consumer electronics giant Xiaomi. Unveiled in 2022 at a company event in Beijing by founder, chairman, and CEO Lei Jun, it is a member of Xiaomi's Cyber series, which also includes the quadruped robots CyberDog and CyberDog 2<ref>https://robotsguide.com/robots/cyberone</ref>.
== Specifications ==
This bipedal humanoid robot has a height of 177 cm and a weight of 52 kg, with an arm span of 168 cm<ref>https://www.gadgets360.com/smart-home/news/xiaomi-cyberone-unveiled-specifications-humanoid-bionic-robot-oled-display-features-3249173</ref>. Among its distinctive features are its ability to perceive its environment in 3D, recognize individuals, and respond to human emotions<ref>https://www.gadgets360.com/smart-home/news/xiaomi-cyberone-unveiled-specifications-humanoid-bionic-robot-oled-display-features-3249173</ref>. It has a top walking speed of 3.6 km/h<ref>https://www.theverge.com/2022/8/16/23307808/xiaomi-cyberone-humanoid-robot-tesla-optimus-bot-specs-comparison</ref>.
== Pricing ==
If CyberOne were ever mass-produced and offered for sale, its cost is estimated at around 600,000 to 700,000 yuan<ref>https://robbreport.com/gear/electronics/xiaomi-humanoid-robot-cyberone-1234738597/</ref>.
{{infobox robot
| name = CyberOne
| organization = [[Xiaomi]]
| height = 177 cm
| weight = 52 kg
| single_arm_payload = 1.5
| runtime =
| walk_speed = 3.6 km/h
| video_link = https://www.youtube.com/watch?v=yBmatGQ0giY
| cost = 600,000 - 700,000 yuan (est.)
}}
[[Category:Robots]]
[[Category:Humanoid Robots]]
== References ==
<references />
cee5154088296004b3baa81505eaa75b5413f18a
599
591
2024-04-28T07:44:43Z
192.145.118.30
0
wikitext
text/x-wiki
CyberOne is a humanoid robot developed by the Chinese consumer electronics giant Xiaomi. Unveiled in 2022 at a company event in Beijing by founder, chairman, and CEO Lei Jun, it is a member of Xiaomi's Cyber series, which also includes the quadruped robots CyberDog and CyberDog 2<ref>https://robotsguide.com/robots/cyberone</ref>.
{{infobox robot
| name = CyberOne
| organization = [[Xiaomi]]
| height = 177 cm
| weight = 52 kg
| single_arm_payload = 1.5
| runtime =
| walk_speed = 3.6 km/h
| video_link = https://www.youtube.com/watch?v=yBmatGQ0giY
| cost = 600,000 - 700,000 yuan (est.)
}}
== Specifications ==
This bipedal humanoid robot has a height of 177 cm and a weight of 52 kg, with an arm span of 168 cm<ref>https://www.gadgets360.com/smart-home/news/xiaomi-cyberone-unveiled-specifications-humanoid-bionic-robot-oled-display-features-3249173</ref>. Among its distinctive features are its ability to perceive its environment in 3D, recognize individuals, and respond to human emotions<ref>https://www.gadgets360.com/smart-home/news/xiaomi-cyberone-unveiled-specifications-humanoid-bionic-robot-oled-display-features-3249173</ref>. It has a top walking speed of 3.6 km/h<ref>https://www.theverge.com/2022/8/16/23307808/xiaomi-cyberone-humanoid-robot-tesla-optimus-bot-specs-comparison</ref>.
== Pricing ==
If CyberOne were ever mass-produced and offered for sale, its cost is estimated at around 600,000 to 700,000 yuan<ref>https://robbreport.com/gear/electronics/xiaomi-humanoid-robot-cyberone-1234738597/</ref>.
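At the exchange rates around the time of the 2022 unveiling (roughly 6.8 to 7.0 yuan per US dollar, an approximation), 600,000 to 700,000 yuan corresponds to approximately USD 86,000 to 103,000.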
[[Category:Robots]]
[[Category:Humanoid Robots]]
== References ==
<references />
965e64b1869ec3959dd35ae96059ca0801a13f0a
CL-1
0
125
582
468
2024-04-28T06:45:09Z
108.211.178.220
0
wikitext
text/x-wiki
The CL-1 is a humanoid robot developed by [[LimX Dynamics]]. It’s one of the few robots globally that can dynamically climb stairs based on real-time terrain perception. This advanced capability is attributed to [[LimX Dynamics]]’ motion control and AI algorithms, as well as its proprietary high-performing actuators and hardware system.
{{infobox robot
| name = CL-1
| organization = [[LimX Dynamics]]
| height =
| weight =
| single_arm_payload =
| runtime =
| walk_speed =
| video_link = https://www.youtube.com/watch?v=sihIDeJ4Hmk
| cost =
}}
[[Category:Robots]]
[[Category:Stompy, Expand!]]
482a95e0b400e977c1c8b3e2ebd9a278eac799ee
590
582
2024-04-28T07:04:15Z
Stompy
14
Bot expanded article
wikitext
text/x-wiki
Specific specifications such as the height, weight, speed, runtime, and cost of the CL-1 robot developed by LimX Dynamics have not been published; more detailed information may be obtainable by direct inquiry to the manufacturer.
The CL-1 is one of the few robots globally that can dynamically climb stairs based on real-time terrain perception, an ability attributed to LimX Dynamics' advanced motion control and AI algorithms, proprietary high-performance actuators, and hardware system<ref>https://www.limxdynamics.com/en/humanoid-robot</ref><ref>https://medium.com/@limxdynamics/limx-dynamics-unveils-dynamic-testing-of-humanoid-robot-achieving-real-time-perceptive-stair-51d37b0cc6a5</ref>.
== References ==
<references />
dcbb71cccc47d9a6c50f1a915308de69e5c11f81
598
590
2024-04-28T07:44:10Z
192.145.118.30
0
Undo revision 590 by [[Special:Contributions/Stompy|Stompy]] ([[User talk:Stompy|talk]])
wikitext
text/x-wiki
The CL-1 is a humanoid robot developed by [[LimX Dynamics]]. It’s one of the few robots globally that can dynamically climb stairs based on real-time terrain perception. This advanced capability is attributed to [[LimX Dynamics]]’ motion control and AI algorithms, as well as its proprietary high-performing actuators and hardware system.
{{infobox robot
| name = CL-1
| organization = [[LimX Dynamics]]
| height =
| weight =
| single_arm_payload =
| runtime =
| walk_speed =
| video_link = https://www.youtube.com/watch?v=sihIDeJ4Hmk
| cost =
}}
[[Category:Robots]]
[[Category:Stompy, Expand!]]
482a95e0b400e977c1c8b3e2ebd9a278eac799ee
605
598
2024-04-28T08:04:09Z
Stompy
14
Bot expanded article
wikitext
text/x-wiki
The CL-1 is a humanoid robot developed by [[LimX Dynamics]]. It’s one of the few robots globally that can dynamically climb stairs based on real-time terrain perception. This advanced capability is attributed to [[LimX Dynamics]]’ motion control and AI algorithms, as well as its proprietary high-performing actuators and hardware system. Although precise specifications on the robot's height, weight, and cost are not publicly available at this time, it has been highlighted for its capabilities in navigating complex environments.<ref>https://www.therobotreport.com/limx-dynamics-shows-off-its-cl-1-humanoids-stair-climbing-abilities/</ref><ref>https://www.rockingrobots.com/limx-dynamics-shows-dynamic-testing-of-humanoid-robot-cl-1/</ref>
{{infobox robot
| name = CL-1
| organization = [[LimX Dynamics]]
| height = Data not available
| weight = Data not available
| single_arm_payload = Data not available
| runtime = Data not available
| walk_speed = Data not available
| video_link = https://www.youtube.com/watch?v=sihIDeJ4Hmk
| cost = Data not available
}}
== Capabilities ==
The CL-1 robot is noted for its dynamic stair-climbing capabilities based on real-time terrain perception, a marked achievement in robotic development. This enables the robot to navigate complex environments, maneuver indoors and outdoors, and effectively complete complex tasks such as climbing stairs and traversing slopes.<ref>https://www.therobotreport.com/limx-dynamics-shows-off-its-cl-1-humanoids-stair-climbing-abilities/</ref>
== References ==
<references />
[[Category:Robots]]
cff9b3c6c17e0a1dcddfda3baac0d94fc7ef65ee
Toyota Research Institute
0
138
583
476
2024-04-28T06:45:18Z
108.211.178.220
0
wikitext
text/x-wiki
Toyota Research Institute is building a humanoid robot called [[T-HR3]].
{{infobox company
| name = Toyota Research Institute
| country = Japan
| website_link = https://global.toyota/
| robots = [[T-HR3]]
}}
[[Category:Companies]]
[[Category:Stompy, Expand!]]
ea48212a16192cb22b976409e8f758ef3fe48e12
596
583
2024-04-28T07:07:42Z
Stompy
14
Bot expanded article
wikitext
text/x-wiki
== Toyota Research Institute ==
Toyota Research Institute (TRI) is a subsidiary of the multinational automotive manufacturer, Toyota, and is based in Japan. The institute is exploring various facets of robotics and is responsible for building Toyota's third generation humanoid robot, the [[T-HR3]]<ref>[https://spectrum.ieee.org/toyota-gets-back-into-humanoid-robots-with-new-thr3 Toyota Gets Back Into Humanoid Robots With New T-HR3]</ref>.
== The T-HR3 Robot ==
The T-HR3 is a 1.5-meter humanoid robot designed to interact safely and naturally with its surroundings, and with the person controlling it<ref>[https://mag.toyota.co.uk/t-hr3-toyota-remote-control-robot/ T-HR3 - Toyota's new remote-controlled robot]</ref>. This robot weighs 75 kilograms and features 32 degrees of torque-controlled motion<ref>[https://spectrum.ieee.org/toyota-gets-back-into-humanoid-robots-with-new-thr3 Toyota Gets Back Into Humanoid Robots With New T-HR3]</ref>. It possesses some core capabilities: flexible joint control, whole-body coordination and balance control, and real remote maneuvering – thanks to its Torque Servo Module. These features enable the robot to control the force of contact it makes with any individual or object in its environment, to retain balance even when it collides with objects, and to mirror user movements with seamless and intuitive control<ref>[https://mag.toyota.co.uk/t-hr3-toyota-remote-control-robot/ T-HR3 - Toyota's new remote-controlled robot]</ref>.
The development of the T-HR3 is championed by a research and development group led by Tomohisa Moridaira, based at Toyota's Tokyo Head Office.<ref>[https://global.toyota/en/newsroom/corporate/30609642.html Why is Toyota Developing Humanoid Robots?]</ref>
== References ==
<references />
{{infobox company
| name = Toyota Research Institute
| country = Japan
| website_link = https://global.toyota/
| robots = [[T-HR3]]
}}
[[Category:Companies]]
[[Category:Humanoid Robots]]
cd65073b53a1bb212f5c5ca29ca19c4418189ce7
603
596
2024-04-28T07:46:46Z
192.145.118.30
0
wikitext
text/x-wiki
Toyota Research Institute (TRI) is a subsidiary of the multinational automotive manufacturer, Toyota, and is based in Japan. The institute is exploring various facets of robotics and is responsible for building Toyota's third generation humanoid robot, the [[T-HR3]]<ref>[https://spectrum.ieee.org/toyota-gets-back-into-humanoid-robots-with-new-thr3 Toyota Gets Back Into Humanoid Robots With New T-HR3]</ref>.
{{infobox company
| name = Toyota Research Institute
| country = Japan
| website_link = https://global.toyota/
| robots = [[T-HR3]]
}}
== The T-HR3 Robot ==
The T-HR3 is a 1.5-meter humanoid robot designed to interact safely and naturally with its surroundings, and with the person controlling it<ref>[https://mag.toyota.co.uk/t-hr3-toyota-remote-control-robot/ T-HR3 - Toyota's new remote-controlled robot]</ref>. This robot weighs 75 kilograms and features 32 degrees of torque-controlled motion<ref>[https://spectrum.ieee.org/toyota-gets-back-into-humanoid-robots-with-new-thr3 Toyota Gets Back Into Humanoid Robots With New T-HR3]</ref>. It possesses some core capabilities: flexible joint control, whole-body coordination and balance control, and real remote maneuvering – thanks to its Torque Servo Module. These features enable the robot to control the force of contact it makes with any individual or object in its environment, to retain balance even when it collides with objects, and to mirror user movements with seamless and intuitive control<ref>[https://mag.toyota.co.uk/t-hr3-toyota-remote-control-robot/ T-HR3 - Toyota's new remote-controlled robot]</ref>.
The development of the T-HR3 is championed by a research and development group led by Tomohisa Moridaira, based at Toyota's Tokyo Head Office.<ref>[https://global.toyota/en/newsroom/corporate/30609642.html Why is Toyota Developing Humanoid Robots?]</ref>
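As a generic illustration of what torque-controlled ("flexible") joint control involves, the sketch below shows a simple spring-damper torque law with a saturation limit. This is an assumption-laden toy example, not Toyota's actual T-HR3 controller; the gains, the limit value, and the function name are arbitrary.
<syntaxhighlight lang=python>
# Generic illustration of compliant, torque-based joint control.
# NOT the T-HR3's controller; gains and limits are illustrative only.
def joint_torque_command(q_des, q, dq_des, dq, kp=40.0, kd=2.0, tau_limit=30.0):
    """Spring-damper torque toward a desired joint position and velocity.

    Saturating the commanded torque is one simple way a torque-controlled
    joint can keep contact forces bounded when it meets an unexpected obstacle.
    """
    tau = kp * (q_des - q) + kd * (dq_des - dq)
    return max(-tau_limit, min(tau_limit, tau))


if __name__ == "__main__":
    # Joint 0.1 rad away from its target, at rest: commands 4.0 (units of N·m).
    print(joint_torque_command(q_des=0.5, q=0.4, dq_des=0.0, dq=0.0))
</syntaxhighlight>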
== References ==
<references />
[[Category:Companies]]
[[Category:Humanoid Robots]]
b8a430ff64fb916e038bb8e5ae58a442e2cd875b
Punyo
0
137
584
471
2024-04-28T06:45:31Z
108.211.178.220
0
wikitext
text/x-wiki
Punyo is a soft robot developed by the [[Toyota Research Institute]] (TRI) to revolutionize whole-body manipulation research. Unlike traditional robots that primarily use hands for manipulation, Punyo employs its arms and chest. The robot is designed to help with everyday tasks, such as lifting heavy objects or closing a drawer.
{{infobox robot
| name = Punyo
| organization = [[Toyota Research Institute]]
| height =
| weight =
| two_hand_payload =
| video_link = https://www.youtube.com/watch?v=FY-MD4gteeE
| cost =
}}
[[Category:Robots]]
[[Category:Stompy, Expand!]]
c30f8b691f91c43e54508188502f4dfd70262978
593
584
2024-04-28T07:05:42Z
Stompy
14
Bot expanded article
wikitext
text/x-wiki
The specific height, weight, and cost of the Punyo robot have not been published. Punyo uses whole-body manipulation, employing its arms and chest to assist with everyday tasks<ref>https://newatlas.com/robotics/toyota-punyo-humanoid-soft-robot/</ref><ref>https://punyo.tech/</ref>.
The robot platform is constructed with compliant materials and tactile mechanisms that increase its ability to safely handle objects, especially large and heavy ones<ref>https://medium.com/toyotaresearch/meet-punyo-tris-soft-robot-for-whole-body-manipulation-research-949c934ac3d8</ref><ref>https://spectrum.ieee.org/humanoid-robot-tri-punyo</ref>.
{{infobox robot
| name = Punyo
| organization = [[Toyota Research Institute]]
| height = Unknown
| weight = Unknown
| two_hand_payload = Unknown
| video_link = https://www.youtube.com/watch?v=FY-MD4gteeE
| cost = Unknown
}}
== Overview ==
Punyo is a soft robot developed by the [[Toyota Research Institute]] (TRI) to revolutionize whole-body manipulation research. Unlike traditional robots that primarily use hands for manipulation, Punyo employs its arms and chest. The robot is designed to help with everyday tasks, such as lifting heavy objects or closing a drawer<ref>https://newatlas.com/robotics/toyota-punyo-humanoid-soft-robot/</ref><ref>https://punyo.tech/</ref>.
== Description ==
Punyo's platform is designed with compliant materials and employs tactile mechanisms that increase its ability to handle objects effectively and safely, especially large and heavy ones<ref>https://medium.com/toyotaresearch/meet-punyo-tris-soft-robot-for-whole-body-manipulation-research-949c934ac3d8</ref><ref>https://spectrum.ieee.org/humanoid-robot-tri-punyo</ref>.
[[Category:Robots]]
== References ==
<references />
b034b3a39ff1ffbbbdb3d66d695a8b480ee8b340
601
593
2024-04-28T07:45:56Z
192.145.118.30
0
wikitext
text/x-wiki
The specific height, weight, and cost of the Punyo robot have not been published. Punyo uses whole-body manipulation, employing its arms and chest to assist with everyday tasks<ref>https://newatlas.com/robotics/toyota-punyo-humanoid-soft-robot/</ref><ref>https://punyo.tech/</ref>.
The robot platform is constructed with compliant materials and tactile mechanisms that increase its ability to safely handle objects, especially large and heavy ones<ref>https://medium.com/toyotaresearch/meet-punyo-tris-soft-robot-for-whole-body-manipulation-research-949c934ac3d8</ref><ref>https://spectrum.ieee.org/humanoid-robot-tri-punyo</ref>.
{{infobox robot
| name = Punyo
| organization = [[Toyota Research Institute]]
| height =
| weight =
| two_hand_payload =
| video_link = https://www.youtube.com/watch?v=FY-MD4gteeE
| cost =
}}
== Overview ==
Punyo is a soft robot developed by the [[Toyota Research Institute]] (TRI) to revolutionize whole-body manipulation research. Unlike traditional robots that primarily use hands for manipulation, Punyo employs its arms and chest. The robot is designed to help with everyday tasks, such as lifting heavy objects or closing a drawer<ref>https://newatlas.com/robotics/toyota-punyo-humanoid-soft-robot/</ref><ref>https://punyo.tech/</ref>.
== Description ==
Punyo's platform is designed with compliant materials and employs tactile mechanisms that increase its ability to handle objects effectively and safely, especially large and heavy ones<ref>https://medium.com/toyotaresearch/meet-punyo-tris-soft-robot-for-whole-body-manipulation-research-949c934ac3d8</ref><ref>https://spectrum.ieee.org/humanoid-robot-tri-punyo</ref>.
== References ==
<references />
[[Category:Robots]]
f3102b97c1415e924b918a5cddde764a19350801
PX5
0
111
585
417
2024-04-28T06:45:43Z
108.211.178.220
0
wikitext
text/x-wiki
PX5 is a humanoid robot from [[Xpeng]].
{{infobox robot
| name = PX5
| organization = [[Xpeng]]
| height =
| weight =
| single_hand_payload =
| two_hand_payload =
| video_link = https://www.youtube.com/watch?v=BNSZ8Fwcd20
| cost =
}}
[[Category:Robots]]
[[Category:Stompy, Expand!]]
cd88c74d1ebb4fcadf61b17b471cbe9bb4565b80
594
585
2024-04-28T07:07:04Z
Stompy
14
Bot expanded article
wikitext
text/x-wiki
The PX5 is a humanoid robot developed by Xpeng, unveiled for the first time during Xpeng Motors' Tech Day in 2023.<ref>https://kr-asia.com/xpeng-motors-unveils-px5-humanoid-robot-underlining-its-vision-for-the-future</ref> The robot stands approximately 1.5 meters in height and is able to navigate different terrain and handle objects with precision, demonstrating remarkable stability.<ref>https://technode.com/2023/10/25/xpeng-tech-day-2023-first-mpv-self-driving-timeline-flying-cars-and-humanoid-robots/</ref> Constructed with a silver-white color scheme, the PX5 is also resistant to shock.<ref>https://kr-asia.com/xpeng-motors-unveils-px5-humanoid-robot-underlining-its-vision-for-the-future</ref>
{{infobox robot
| name = PX5
| organization = [[Xpeng]]
| height = 1.5 meters
| weight =
| video_link = https://www.youtube.com/watch?v=BNSZ8Fwcd20
| cost =
}}
== Development ==
Xpeng Robotics, an ecosystem company of Xpeng specializing in smart robots, unveiled the PX5. Founded in 2016, the company innovates in areas such as robot powertrains, locomotion control, robot autonomy, robot interaction, and artificial intelligence, contributing to a shared mission of exploring future mobility solutions.<ref>https://www.pxing.com/en/about</ref>
== Design and Capabilities ==
The PX5 has a striking silver-white finish and is resistant to shock. Its ability to navigate different terrains and handle objects, such as a pen, with exceptional stability has been highlighted in demonstrations.<ref>https://kr-asia.com/xpeng-motors-unveils-px5-humanoid-robot-underlining-its-vision-for-the-future</ref> <ref>https://technode.com/2023/10/25/xpeng-tech-day-2023-first-mpv-self-driving-timeline-flying-cars-and-humanoid-robots/</ref>
[[Category:Robots]]
[[Category:Xpeng Robots]]
== References ==
<references />
c6b8a0a6adb3aac63cce35a967293da314182ab0
Main Page
0
1
586
419
2024-04-28T06:46:02Z
108.211.178.220
0
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
Here are some resources to get started learning about humanoid robots.
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
|
|-
| [[Learning algorithms]]
| Resources related with training humanoid models in simulation and real environments
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== List of Actuators ===
{| class="wikitable"
|-
! Actuator
! Notes
|-
| [[MyActuator X-Series]]
| MIT Cheetah-like quasi-direct drive actuator, with planetary gears
|-
| [[OBot]]
| Open-source actuator
|-
| [[SPIN Servo]]
| Open-source actuator
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[Xpeng]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[Proxy]]
|
|}
152351df7d24c1b25a6bc27068461d60c15ebcb4
608
586
2024-04-28T16:23:39Z
Modeless
7
/* List of Humanoid Robots */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
Here are some resources to get started learning about humanoid robots.
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
|
|-
| [[Learning algorithms]]
| Resources related with training humanoid models in simulation and real environments
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== List of Actuators ===
{| class="wikitable"
|-
! Actuator
! Notes
|-
| [[MyActuator X-Series]]
| MIT Cheetah-like quasi-direct drive actuator, with planetary gears
|-
| [[OBot]]
| Open-source actuator
|-
| [[SPIN Servo]]
| Open-source actuator
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[Xpeng]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[Proxy]]
|
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|}
9f488d3d39b8bb0b0469f8cf9ca556cbb3e52158
Nadia
0
109
587
415
2024-04-28T06:46:19Z
108.211.178.220
0
wikitext
text/x-wiki
Nadia is a humanoid robot from [[IHMC, Boardwalk Robotics]].
{{infobox robot
| name = Nadia
| organization = [[IHMC, Boardwalk Robotics]]
| height =
| weight =
| single_hand_payload =
| two_hand_payload =
| video_link = https://www.youtube.com/watch?v=uTmUfOc7r_s
| cost =
}}
[[Category:Robots]]
[[Category:Stompy, Expand!]]
20098c581d4658fba1d2676af685464e23a15abe
592
587
2024-04-28T07:05:10Z
Stompy
14
Bot expanded article
wikitext
text/x-wiki
Nadia is a humanoid robot developed by the Institute for Human & Machine Cognition (IHMC) in collaboration with Boardwalk Robotics<ref>[http://robots.ihmc.us/nadia Nadia Humanoid — IHMC Robotics Lab]</ref>. The robot is designed with a high power-to-weight ratio and a large range of motion, characteristics which provide it with exceptional mobility<ref>[https://www.boardwalkrobotics.com/Nadia.html Nadia Humanoid Robot - Boardwalk Robotics]</ref>. Though specific information regarding height, weight, and payload capacities has not been explicitly stated, Nadia reportedly has one of the highest ranges of motion across its 29 joints of any humanoid robot globally<ref>[https://www.boardwalkrobotics.com/Nadia.html Nadia Humanoid Robot - Boardwalk Robotics]</ref>.
{{infobox robot
| name = Nadia
| organization = [[IHMC, Boardwalk Robotics]]
| height = Not available
| weight = Not available
| single_hand_payload = Not available
| two_hand_payload = Not available
| video_link = https://www.youtube.com/watch?v=uTmUfOc7r_s
| cost = Not available
}}
== Design and Capabilities ==
The Nadia humanoid robot's design encompasses a high power-to-weight ratio, contributing to its significant mobility potential. It stands out due to its extensive range of motion, facilitated by its architecture of 29 joints<ref>[https://www.boardwalkrobotics.com/Nadia.html Nadia Humanoid Robot - Boardwalk Robotics]</ref>. These design features enable Nadia to adapt to and function within urban environments, aligning with the project's goal of facilitating semi-autonomous behaviors.
Built with the same intelligence that powers the IHMC's DRC-Atlas robot, Nadia boasts real-time perception, compliant locomotion, autonomous footstep placement, and dexterous VR-teleoperated manipulation<ref>[https://www.boardwalkrobotics.com/Nadia.html Nadia Humanoid Robot - Boardwalk Robotics]</ref>.
== Research and Development ==
The development of Nadia is a collaborative project by the IHMC Robotics Lab and Boardwalk Robotics. The research team aims to produce a next-generation humanoid, capable of executing more perilous tasks while retaining high mobility<ref>[https://www.bbc.com/news/world-us-canada-67722014 A VR-controlled robot that throws boxing punches - BBC]</ref>. This development project positions Nadia as one of the most mobile ground robots designed in-house at IHMC in nearly a decade<ref>[https://www.ihmc.us/news20221005/ Video shows progress of IHMC humanoid robot Nadia]</ref>.
== References ==
<references />
[[Category:Robots]]
46cf2b83efd2741aecdad111bdebba3a37205110
600
592
2024-04-28T07:45:19Z
192.145.118.30
0
wikitext
text/x-wiki
Nadia is a humanoid robot developed by the Institute for Human & Machine Cognition (IHMC) in collaboration with Boardwalk Robotics<ref>[http://robots.ihmc.us/nadia Nadia Humanoid — IHMC Robotics Lab]</ref>. The robot is designed with a high power-to-weight ratio and a large range of motion, characteristics which provide it with exceptional mobility<ref>[https://www.boardwalkrobotics.com/Nadia.html Nadia Humanoid Robot - Boardwalk Robotics]</ref>. Though specific information regarding height, weight, and payload capacities has not been explicitly stated, Nadia reportedly has one of the highest ranges of motion across its 29 joints of any humanoid robot globally<ref>[https://www.boardwalkrobotics.com/Nadia.html Nadia Humanoid Robot - Boardwalk Robotics]</ref>.
{{infobox robot
| name = Nadia
| organization = [[IHMC, Boardwalk Robotics]]
| height =
| weight =
| single_hand_payload =
| two_hand_payload =
| video_link = https://www.youtube.com/watch?v=uTmUfOc7r_s
| cost =
}}
== Design and Capabilities ==
The Nadia humanoid robot's design encompasses a high power-to-weight ratio, contributing to its significant mobility potential. It stands out due to its extensive range of motion, facilitated by its architecture of 29 joints<ref>[https://www.boardwalkrobotics.com/Nadia.html Nadia Humanoid Robot - Boardwalk Robotics]</ref>. These design features enable Nadia to adapt to and function within urban environments, aligning with the project's goal of facilitating semi-autonomous behaviors.
Built with the same intelligence that powers the IHMC's DRC-Atlas robot, Nadia boasts real-time perception, compliant locomotion, autonomous footstep placement, and dexterous VR-teleoperated manipulation<ref>[https://www.boardwalkrobotics.com/Nadia.html Nadia Humanoid Robot - Boardwalk Robotics]</ref>.
== Research and Development ==
The development of Nadia is a collaborative project by the IHMC Robotics Lab and Boardwalk Robotics. The research team aims to produce a next-generation humanoid, capable of executing more perilous tasks while retaining high mobility<ref>[https://www.bbc.com/news/world-us-canada-67722014 A VR-controlled robot that throws boxing punches - BBC]</ref>. This development project positions Nadia as one of the most mobile ground robots designed in-house at IHMC in nearly a decade<ref>[https://www.ihmc.us/news20221005/ Video shows progress of IHMC humanoid robot Nadia]</ref>.
== References ==
<references />
[[Category:Robots]]
aecd83d6eb23b9871acad3520b0184e3d5a1a5c3
Westwood Robotics
0
116
588
429
2024-04-28T06:46:43Z
108.211.178.220
0
wikitext
text/x-wiki
Westwood Robotics is building a humanoid robot called [[THEMIS]].
{{infobox company
| name = Westwood Robotics
| country = USA
| website_link = https://www.westwoodrobotics.io/
| robots = [[THEMIS]]
}}
[[Category:Companies]]
[[Category:Stompy, Expand!]]
1b5ce03e8e2e32f9c6139992abda2505e303d0bf
597
588
2024-04-28T07:08:07Z
Stompy
14
Bot expanded article
wikitext
text/x-wiki
Westwood Robotics, an American company, is developing a humanoid robot called THEMIS. The robot reflects the company's goal of embedding versatile humanoid robots into daily life and of building machines that can carry out complex tasks with human-like grace and aptitude.
THEMIS is intended as a general-purpose platform that can interact safely with its environment while performing dynamic movements, including walking, running, and parkour maneuvers.
Westwood Robotics was founded in 2018 by roboticists from the Robotics & Mechanisms Laboratory (RoMeLa). Its stated mission is to bring robots closer to people by making human-robot interaction safer and robot applications more versatile.
== THEMIS Development ==
Westwood Robotics' THEMIS is an advanced humanoid robot engineered to interact safely with its environment while executing a wide range of motions and maneuvers. It is designed to perform tasks with a degree of precision and finesse intended to match human capabilities. Its design allows for complex and dynamic maneuvers, including walking, running, and intricate parkour movements, reflecting the firm's ambition to broaden the applications of humanoid robotics.<ref name="MicroStrainCaseStudy" />
== History ==
Westwood Robotics was founded in 2018 by a group of roboticists from the Robotics & Mechanisms Laboratory (RoMeLa). The company is dedicated to developing humanoid robots that can interact safely with humans and their environment, a mission embodied in its flagship product, THEMIS.<ref name="LinkedInProfile" />
== See Also ==
* [[Science and technology in the United States]]
* [[List of robot companies]]
== References ==
<references>
<ref name="MicroStrainCaseStudy">[https://www.microstrain.com/blog/case-study-pushing-the-boundaries-of-humanoid-robot-performance-with-westwood Case Study: Pushing the Boundaries of Humanoid Robot Performance with Westwood Robotics THEMIS]</ref>
<ref name="LinkedInProfile">[https://www.linkedin.com/company/westwoodrobotics Westwood Robotics Corporation LinkedIn Profile]</ref>
</references>
[[Category:Companies]]
[[Category:Robotics Companies]]
[[Category:Humanoid Robots]]
[[Category:Science and technology in the United States]]
5f034d6daa0b1d8c36aa935b3ff26a35f2d55741
604
597
2024-04-28T07:47:04Z
192.145.118.30
0
Undo revision 597 by [[Special:Contributions/Stompy|Stompy]] ([[User talk:Stompy|talk]])
wikitext
text/x-wiki
Westwood Robotics is building a humanoid robot called [[THEMIS]].
{{infobox company
| name = Westwood Robotics
| country = USA
| website_link = https://www.westwoodrobotics.io/
| robots = [[THEMIS]]
}}
[[Category:Companies]]
[[Category:Stompy, Expand!]]
1b5ce03e8e2e32f9c6139992abda2505e303d0bf
607
604
2024-04-28T08:04:48Z
Stompy
14
Bot expanded article
wikitext
text/x-wiki
== Westwood Robotics ==
Westwood Robotics is a company based in the USA dedicated to bringing robots closer to people. It was founded in 2018 by a team of roboticists from the Robotics & Mechanisms Laboratory (RoMeLa) at the University of California, Los Angeles (UCLA).<ref>https://www.linkedin.com/company/westwoodrobotics</ref> The company develops a range of robotics products, among which is [[THEMIS]], a humanoid robot.
== THEMIS ==
THEMIS is a humanoid robot developed by Westwood Robotics. Detailed public information about the robot is limited.
== Other Products ==
Apart from THEMIS, Westwood Robotics has also developed a kid-size humanoid robot that serves as an open platform for research and education, as well as the BEAR series actuators, which are marketed for their live compliance and dynamic performance.<ref>https://www.westwoodrobotics.io/</ref>
{{infobox company
| name = Westwood Robotics
| country = USA
| website_link = https://www.westwoodrobotics.io/
| robots = [[THEMIS]]
}}
[[Category:Companies]]
== References ==
<references />
c20b21639a988f54b21fd1e95fc641dfb9488439
THEMIS
0
115
589
428
2024-04-28T06:46:51Z
108.211.178.220
0
wikitext
text/x-wiki
THEMIS is a humanoid robot from [[Westwood Robotics]].
{{infobox robot
| name = THEMIS
| organization = [[Westwood Robotics]]
| height = 142.2 cm
| weight = 39 kg
| video_link = https://www.youtube.com/watch?v=yt4mHwAl9cc
| cost =
}}
[[Category:Robots]]
[[Category:Stompy, Expand!]]
90a63767a105f7059c3e345706b28ba6b2e78797
595
589
2024-04-28T07:07:17Z
Stompy
14
Bot expanded article
wikitext
text/x-wiki
Pricing for the THEMIS robot from Westwood Robotics has not been made public. This may be because the robot is used in specific, restricted research domains, or because the cost varies with customization and updates to the model. The infobox below reflects the information that is available:
{{infobox robot
| name = THEMIS
| organization = [[Westwood Robotics]]
| height = 142.2 cm
| weight = 39 kg
| video_link = https://www.youtube.com/watch?v=yt4mHwAl9cc
| cost = Not publicly available
}}
[[Category:Robots]]
88bc12827aff018d841a68c91e25cc15d8705817
602
595
2024-04-28T07:46:21Z
192.145.118.30
0
Undo revision 595 by [[Special:Contributions/Stompy|Stompy]] ([[User talk:Stompy|talk]])
wikitext
text/x-wiki
THEMIS is a humanoid robot from [[Westwood Robotics]].
{{infobox robot
| name = THEMIS
| organization = [[Westwood Robotics]]
| height = 142.2 cm
| weight = 39 kg
| video_link = https://www.youtube.com/watch?v=yt4mHwAl9cc
| cost =
}}
[[Category:Robots]]
[[Category:Stompy, Expand!]]
90a63767a105f7059c3e345706b28ba6b2e78797
606
602
2024-04-28T08:04:29Z
Stompy
14
Bot expanded article
wikitext
text/x-wiki
THEMIS is a humanoid robot from [[Westwood Robotics]]. It should not be confused with THeMIS, an unmanned ground vehicle (UGV) developed by Milrem Robotics<ref>[https://milremrobotics.com/defence/ Milrem Robotics Defence]</ref>. Beyond the figures below, little public information is available about the Westwood Robotics THEMIS.
The known specifications are summarized in the following infobox (the cost field remains unfilled):
{{infobox robot
| name = THEMIS
| organization = [[Westwood Robotics]]
| video_link = https://www.youtube.com/watch?v=yt4mHwAl9cc
| height = 142.2 cm
| weight = 39 kg
| cost =
| speed =
| lift_force =
| battery_life =
| battery_capacity =
| purchase_link =
| number_made =
| dof =
| status =
}}
Further details about the robot's cost, speed, lift force, number made, degrees of freedom, battery life and capacity, and development status have not yet been published.
== References ==
<references />
[[Category:Robots]]
2b3fe98a0a7d9d60acc83f54e83c388b39feb11c
Tiangong
0
150
609
2024-04-28T16:27:08Z
Modeless
7
Created page with "Sharing the name of the Chinese space station for some reason, this robot is claimed to be the "first" running electric humanoid. Unitree H1 already runs faster though. Here i..."
wikitext
text/x-wiki
Tiangong, which shares its name with the Chinese space station, is claimed to be the "first" full-size electric humanoid robot capable of running, although the [[Unitree]] [[H1]] already runs faster. An article on the announcement, with video: https://www.maginative.com/article/meet-tiangong-chinas-full-size-electric-running-humanoid-robot/
00b08c40be9106acae944d05a6af6e27f5291755
Astribot S1
0
151
610
2024-04-28T18:46:11Z
185.169.0.227
0
Created page with "Astribot is a Chinese bot from a subsidiary of [[Stardust Intelligence]]. [[Category:Stompy, Expand!]]"
wikitext
text/x-wiki
Astribot is a Chinese humanoid robot developed by a subsidiary of [[Stardust Intelligence]].
[[Category:Stompy, Expand!]]
c21f39c97f25f51c0953aebeb1d6ed6ecaa3d3f1
611
610
2024-04-28T18:46:34Z
185.169.0.227
0
wikitext
text/x-wiki
Astribot is a Chinese humanoid robot developed by a subsidiary of [[Stardust Intelligence]].
[[Category:Robots]]
[[Category:Stompy, Expand!]]
e52d4aebea0ed7651abddcfbd5c2655bc52ab48d
Main Page
0
1
613
608
2024-04-28T18:47:12Z
185.169.0.227
0
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
Here are some resources to get started learning about humanoid robots.
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
|
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== List of Actuators ===
{| class="wikitable"
|-
! Actuator
! Notes
|-
| [[MyActuator X-Series]]
| MIT Cheetah-like quasi-direct drive actuator, with planetary gears
|-
| [[OBot]]
| Open-source actuator
|-
| [[SPIN Servo]]
| Open-source actuator
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[Xpeng]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[Proxy]]
|
|-
|
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|}
1d04e36953efdcbf23c1abb2a170bd8fdff9b84c
627
613
2024-04-29T01:52:02Z
194.195.93.80
0
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
Here are some resources to get started learning about humanoid robots.
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
|
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[Reinforcement Learning]]
| Resources related to understanding reinforcement learning
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== List of Actuators ===
{| class="wikitable"
|-
! Actuator
! Notes
|-
| [[MyActuator X-Series]]
| MIT Cheetah-like quasi-direct drive actuator, with planetary gears
|-
| [[OBot]]
| Open-source actuator
|-
| [[SPIN Servo]]
| Open-source actuator
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[Xpeng]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[Proxy]]
|
|-
|
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|}
7f2b254cfcc3b6dc1bf50eb66c7aedfb437f40a8
629
627
2024-04-29T04:33:49Z
108.211.178.220
0
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
Here are some resources to get started learning about humanoid robots.
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
|
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[Reinforcement Learning]]
| Resources related to understanding reinforcement learning
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== List of Actuators ===
{| class="wikitable"
|-
! Actuator
! Notes
|-
| [[MyActuator X-Series]]
| MIT Cheetah-like quasi-direct drive actuator, with planetary gears
|-
| [[OBot]]
| Open-source actuator
|-
| [[SPIN Servo]]
| Open-source actuator
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[Xpeng]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[Proxy]]
|
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|}
70ff00969755912c7f1f1bc940f7231e3076f27a
632
629
2024-04-29T16:12:19Z
38.108.138.43
0
/* List of Humanoid Robots */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
Here are some resources to get started learning about humanoid robots.
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
|
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[Reinforcement Learning]]
| Resources related to understanding reinforcement learning
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== List of Actuators ===
{| class="wikitable"
|-
! Actuator
! Notes
|-
| [[MyActuator X-Series]]
| MIT Cheetah-like quasi-direct drive actuator, with planetary gears
|-
| [[OBot]]
| Open-source actuator
|-
| [[SPIN Servo]]
| Open-source actuator
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
! License
! Links
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[Xpeng]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[Proxy]]
|
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Kayra.org]]
| [[Kayra]]
| [[Open Source]]
| [https://github.com/assadollahi/kayra GitHub]
|}
8d6bbaebefb0ac5c5eadef1ac9edcfa99d2e054b
634
632
2024-04-29T17:41:27Z
185.187.168.105
0
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
Here are some resources to get started learning about humanoid robots.
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
|
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[Reinforcement Learning]]
| Resources related to understanding reinforcement learning
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== List of Actuators ===
{| class="wikitable"
|-
! Actuator
! Notes
|-
| [[MyActuator X-Series]]
| MIT Cheetah-like quasi-direct drive actuator, with planetary gears
|-
| [[OBot]]
| Open-source actuator
|-
| [[SPIN Servo]]
| Open-source actuator
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[Xpeng]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[Proxy]]
|
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Kayra.org]]
| [[Kayra]]
|}
95bcdfc7b05ed863b21d0b94f5ddba47997bf06a
Astribot S1
0
151
614
611
2024-04-28T19:04:06Z
Stompy
14
Bot expanded article
wikitext
text/x-wiki
== Overview ==
Astribot, also known as the S1 robot, is a product of the Chinese tech company AstriBot Corporation, a subsidiary of Stardust Intelligence. The robot's name is a reference to the Latin phrase "Ad astra per aspera" ("through hardships to the stars"), which the company renders as "through the arduous journey, to reach the stardust."<ref>https://www.maginative.com/article/stardust-intelligences-impressive-s1-robot-demo-raises-eyebrows/</ref>
The Astribot showcases significant progress in humanoid robotics, with a stated top movement speed of 10 meters per second and the ability to lift up to 10 kilograms, a notable stride for AI-controlled humanoid robots.<ref>https://elblog.pl/2024/04/28/astribot-corporations-s1-robot-promises-swift-and-skilled-assistance/</ref> Astribot's parent company, Stardust Intelligence, was founded by Lai Jie, who has also contributed to Tencent Robotics Laboratory, Baidu, and Hong Kong Polytechnic.
{{infobox robot
| name = Astribot
| organization = AstriBot Corporation
| video_link =
| cost =
| height =
| weight =
| speed = 10 m/s
| lift_force = 10 kg
| battery_life =
| battery_capacity =
| purchase_link =
| number_made =
| dof =
| status =
}}
== Capabilities ==
Astribot S1 is described as an 'all-purpose' home robot. While the full extent of its capabilities is still being revealed, it is said to have the potential for tasks like folding clothes, cooking meals, and cleaning rooms.<ref>https://businessnewsforkids.substack.com/p/85-a-super-cool-home-robot</ref>
== References ==
<references />
[[Category:Robots]]
[[Category:Chinese Robots]]
[[Category:Stardust Intelligence]]
b856fac9e84a7a6403844483b813afd311f744d1
615
614
2024-04-28T23:58:19Z
194.195.93.78
0
wikitext
text/x-wiki
== Overview ==
Astribot, also known as the S1 robot, is a product of the Chinese tech company [[AstriBot Corporation]], a subsidiary of Stardust Intelligence. The robot's name is a reference to the Latin phrase "Ad astra per aspera" ("through hardships to the stars"), which the company renders as "through the arduous journey, to reach the stardust."<ref>https://www.maginative.com/article/stardust-intelligences-impressive-s1-robot-demo-raises-eyebrows/</ref>
The Astribot showcases significant progress in humanoid robotics, with a stated top movement speed of 10 meters per second and the ability to lift up to 10 kilograms, a notable stride for AI-controlled humanoid robots.<ref>https://elblog.pl/2024/04/28/astribot-corporations-s1-robot-promises-swift-and-skilled-assistance/</ref> Astribot's parent company, Stardust Intelligence, was founded by Lai Jie, who has also contributed to Tencent Robotics Laboratory, Baidu, and Hong Kong Polytechnic.
{{infobox robot
| name = Astribot
| organization = AstriBot Corporation
| video_link =
| cost =
| height =
| weight =
| speed = 10 m/s
| lift_force = 10 kg
| battery_life =
| battery_capacity =
| purchase_link =
| number_made =
| dof =
| status =
}}
== Capabilities ==
Astribot S1 is described as an 'all-purpose' home robot. While the full extent of its capabilities is still being revealed, it is said to have the potential for tasks like folding clothes, cooking meals, and cleaning rooms.<ref>https://businessnewsforkids.substack.com/p/85-a-super-cool-home-robot</ref>
== References ==
<references />
[[Category:Robots]]
[[Category:Stompy, Expand!]]
df461eb4bf551f5ec39fe06d07524fcf17248ac2
619
615
2024-04-29T00:04:41Z
Stompy
14
Bot expanded article
wikitext
text/x-wiki
The Astribot S1 boasts an array of features that set it apart from its predecessors. It is designed to operate without the need for human guidance during its tasks, emphasizing its autonomy<ref>https://www.geeky-gadgets.com/astribot-s1-ai-humanoid-robot/</ref>. The robot can assist with a variety of housework, including preparing drinks, ironing, and folding clothes<ref>https://www.msn.com/en-us/news/technology/astribot-ai-powered-humanoid-torso-can-prepare-drinks-help-with-housework/ar-AA1nP0ah</ref>.
Key specifications of the Astribot S1 include a top movement speed of 10 meters per second (faster than an average adult man can move), a payload capacity of 10 kg per arm, and 7 degrees of freedom per arm, giving the arms a range of motion rivaling that of a human limb<ref>https://digialps.com/meet-the-incredibly-fast-astribot-s1-the-humanoid-robot-that-learns-at-1x-speed-without-any-help/</ref>.
{{infobox robot
| name = Astribot
| organization = AstriBot Corporation
| video_link =
| cost =
| height =
| weight =
| speed = 10 m/s
| lift_force = 10 kg
| battery_life =
| battery_capacity =
| purchase_link =
| number_made =
| dof = 7 (per arm)
| status =
}}
== Capabilities ==
Astribot S1 is described as an 'all-purpose' home robot. While the full extent of its capabilities is still being revealed, it is said to have the potential for tasks like folding clothes, cooking meals, and cleaning rooms<ref>https://businessnewsforkids.substack.com/p/85-a-super-cool-home-robot</ref>. In addition to these household tasks, the Astribot S1 can perform intricate manipulations such as opening bottles and pouring drinks, illustrating the robot's dexterity<ref>https://www.msn.com/en-us/news/technology/astribot-ai-powered-humanoid-torso-can-prepare-drinks-help-with-housework/ar-AA1nP0ah</ref>.
== References ==
<references />
[[Category:Robots]]
38c35a87800eb4174cec1590a041c5f1569a931d
AstriBot Corporation
0
152
616
2024-04-28T23:58:39Z
194.195.93.78
0
Created page with "Creators of the [[Astribot S1]]. [[Category:Stompy, Expand!]]"
wikitext
text/x-wiki
Creators of the [[Astribot S1]].
[[Category:Stompy, Expand!]]
01587f1632781104c70ce791d728d9514c942c3f
617
616
2024-04-28T23:58:57Z
194.195.93.78
0
wikitext
text/x-wiki
Creators of the [[Astribot S1]].
[[Category:Companies]]
[[Category:Stompy, Expand!]]
2f1a57e6682f5a60a7ea4e8ec9ece55d513af156
618
617
2024-04-29T00:04:12Z
Stompy
14
Bot expanded article
wikitext
text/x-wiki
The Astribot S1 is a product of the Chinese tech company Astribot Corporation, a subsidiary of Stardust Intelligence. This firm has made notable strides in the field of AI-controlled humanoid robots, developing the Astribot S1 as a highly efficient helper capable of lifting objects weighing up to 10 kilograms and moving at a speed of 10 meters per second.<ref>https://elblog.pl/2024/04/28/astribot-corporations-s1-robot-promises-swift-and-skilled-assistance/</ref>
AstriBot Corporation started operation in 2022 and took just a year to develop its first humanoid robot, S1.<ref>https://www.msn.com/en-us/news/other/china-s-s1-robot-impresses-with-its-human-like-speed-and-precision/ar-AA1nJ0BG</ref>
== Overview ==
AstriBot Corporation, a subsidiary of Stardust Intelligence, is a Chinese tech company responsible for the creation and development of the Astribot S1, a humanoid robotic assistant. With the ability to lift up to 10 kilograms and move at speeds of up to 10 meters per second, the Astribot S1 represents a significant advance in AI-controlled humanoid robotics. The company began operations in 2022 and developed its first humanoid robot, the S1, within a year.
{{infobox company
| name = AstriBot Corporation
| country = China
| website_link =
| robots = Astribot S1
}}
[[Category:Companies]]
[[Category:Robotics]]
== References ==
<references />
55ce9381269bfe75632c051a5df02988fd482821
620
618
2024-04-29T00:41:33Z
194.195.93.78
0
wikitext
text/x-wiki
The [[Astribot S1]] is a product of the Chinese tech company Astribot Corporation, a subsidiary of Stardust Intelligence. This firm has made notable strides in the field of AI-controlled humanoid robots, developing the Astribot S1 as a highly efficient helper capable of lifting objects weighing up to 10 kilograms and moving at a speed of 10 meters per second.<ref>https://elblog.pl/2024/04/28/astribot-corporations-s1-robot-promises-swift-and-skilled-assistance/</ref>
AstriBot Corporation started operation in 2022 and took just a year to develop its first humanoid robot, S1.<ref>https://www.msn.com/en-us/news/other/china-s-s1-robot-impresses-with-its-human-like-speed-and-precision/ar-AA1nJ0BG</ref>
== Overview ==
AstriBot Corporation, a subsidiary of Stardust Intelligence, is a Chinese tech company responsible for the creation and development of the Astribot S1, a humanoid robotic assistant. With the ability to lift up to 10 kilograms and move at speeds of up to 10 meters per second, the Astribot S1 represents a significant advance in AI-controlled humanoid robotics. The company began operations in 2022 and developed its first humanoid robot, the S1, within a year.
{{infobox company
| name = AstriBot Corporation
| country = China
| website_link =
| robots = Astribot S1
}}
[[Category:Companies]]
[[Category:Robotics]]
== References ==
<references />
e4b6722330d959ecf676de42ab69cd2a2599cab5
621
620
2024-04-29T00:41:41Z
194.195.93.78
0
wikitext
text/x-wiki
The [[Astribot S1]] is a product of the Chinese tech company Astribot Corporation, a subsidiary of Stardust Intelligence. This firm has made notable strides in the field of AI-controlled humanoid robots, developing the Astribot S1 as a highly efficient helper capable of lifting objects weighing up to 10 kilograms and moving at a speed of 10 meters per second.<ref>https://elblog.pl/2024/04/28/astribot-corporations-s1-robot-promises-swift-and-skilled-assistance/</ref>
AstriBot Corporation started operation in 2022 and took just a year to develop its first humanoid robot, S1.<ref>https://www.msn.com/en-us/news/other/china-s-s1-robot-impresses-with-its-human-like-speed-and-precision/ar-AA1nJ0BG</ref>
== Overview ==
AstriBot Corporation, a subsidiary of Stardust Intelligence, is a Chinese tech company responsible for the creation and development of the Astribot S1, a humanoid robotic assistant. With the ability to lift up to 10 kilograms and move at speeds of up to 10 meters per second, the Astribot S1 represents a significant advance in AI-controlled humanoid robotics. The company began operations in 2022 and developed its first humanoid robot, the S1, within a year.
{{infobox company
| name = AstriBot Corporation
| country = China
| website_link =
| robots = Astribot S1
}}
== References ==
<references />
[[Category:Companies]]
61f63fa7bb147f695b9d8109eea1d54387c153bc
Optimus
0
22
622
527
2024-04-29T00:42:07Z
194.195.93.78
0
wikitext
text/x-wiki
[[File:Optimus Tesla (1).jpg|right|200px|thumb]]
Optimus is a humanoid robot developed by [[Tesla]], an American electric vehicle and clean energy company. Also known as Tesla Bot, Optimus is a key component of Tesla's expansion into automation and artificial intelligence technologies.
{{infobox robot
| name = Optimus
| organization = [[Tesla]]
| height = 5 ft 8 in (173 cm)
| weight = 58 kg
| video_link = https://www.youtube.com/watch?v=cpraXaw7dyc
| cost = Unknown, rumored $20k
}}
== Development ==
Tesla initiated the development of the Optimus robot in 2021, with the goal of creating a multipurpose utility robot capable of performing unsafe, repetitive, or boring tasks, primarily in a factory setting. Tesla's CEO, Elon Musk, has stated that Optimus could eventually transition to performing tasks in domestic environments.
== Design ==
The robot stands at a height of 5 feet 8 inches and weighs approximately 58 kilograms. Its design focuses on replacing human labor in hazardous environments, incorporating advanced sensors and algorithms to navigate complex workspaces safely.
== Features ==
The features of Optimus are built around its capability to handle tools, carry out tasks requiring fine motor skills, and interact safely with human environments. The robot is equipped with Tesla's proprietary Full Self-Driving (FSD) computer, allowing it to understand and navigate real-world scenarios effectively.
== Impact ==
Optimus has significant potential implications for labor markets, particularly in industries reliant on manual labor. Its development also sparks discussions on ethics and the future role of robotics in society.
== References ==
* [https://www.tesla.com Tesla official website]
* [https://www.youtube.com/watch?v=cpraXaw7dyc Presentation of Optimus by Tesla]
[[Category:Robots]]
8ba29aa425ac39f70f5cad0bc2c5c560dfa3e217
CyberOne
0
126
623
599
2024-04-29T00:42:19Z
194.195.93.78
0
wikitext
text/x-wiki
CyberOne is a humanoid robot developed by the Chinese consumer electronics giant, Xiaomi. Unveiled in 2022 at a company event in Beijing by the founder, chairman, and CEO, Lei Jun, it is the newest member of Xiaomi's Cyber series, joining previously launched quadruped robots like CyberDog and CyberDog 2<ref>https://robotsguide.com/robots/cyberone</ref>.
{{infobox robot
| name = CyberOne
| organization = [[Xiaomi]]
| height = 177 cm
| weight = 52 kg
| single_arm_payload = 1.5
| runtime =
| walk_speed = 3.6 km/h
| video_link = https://www.youtube.com/watch?v=yBmatGQ0giY
| cost = 600,000 - 700,000 yuan (est.)
}}
== Specifications ==
This bipedal humanoid robot has a height of 177 cm and a weight of 52 kg, with an arm span of 168 cm<ref>https://www.gadgets360.com/smart-home/news/xiaomi-cyberone-unveiled-specifications-humanoid-bionic-robot-oled-display-features-3249173</ref>. Among its distinctive features are its ability to perceive in 3D, recognize individuals, and respond to human emotions<ref>https://www.gadgets360.com/smart-home/news/xiaomi-cyberone-unveiled-specifications-humanoid-bionic-robot-oled-display-features-3249173</ref>. It has a top speed of 3.6 km/h<ref>https://www.theverge.com/2022/8/16/23307808/xiaomi-cyberone-humanoid-robot-tesla-optimus-bot-specs-comparison</ref>.
== Pricing ==
The cost of CyberOne, if ever produced and made available for purchase, is estimated to be around 600,000 to 700,000 yuan<ref>https://robbreport.com/gear/electronics/xiaomi-humanoid-robot-cyberone-1234738597/</ref>.
[[Category:Robots]]
== References ==
<references />
78fcd0eab316db9aa747f8a9e59cf1d0103c2fa6
Toyota Research Institute
0
138
624
603
2024-04-29T00:42:24Z
194.195.93.78
0
wikitext
text/x-wiki
Toyota Research Institute (TRI) is a subsidiary of the multinational automotive manufacturer, Toyota, and is based in Japan. The institute is exploring various facets of robotics and is responsible for building Toyota's third generation humanoid robot, the [[T-HR3]]<ref>[https://spectrum.ieee.org/toyota-gets-back-into-humanoid-robots-with-new-thr3 Toyota Gets Back Into Humanoid Robots With New T-HR3]</ref>.
{{infobox company
| name = Toyota Research Institute
| country = Japan
| website_link = https://global.toyota/
| robots = [[T-HR3]]
}}
== The T-HR3 Robot ==
The T-HR3 is a 1.5-meter humanoid robot designed to interact safely and naturally with its surroundings, and with the person controlling it<ref>[https://mag.toyota.co.uk/t-hr3-toyota-remote-control-robot/ T-HR3 - Toyota's new remote-controlled robot]</ref>. This robot weighs 75 kilograms and features 32 degrees of torque-controlled motion<ref>[https://spectrum.ieee.org/toyota-gets-back-into-humanoid-robots-with-new-thr3 Toyota Gets Back Into Humanoid Robots With New T-HR3]</ref>. It possesses some core capabilities: flexible joint control, whole-body coordination and balance control, and real remote maneuvering – thanks to its Torque Servo Module. These features enable the robot to control the force of contact it makes with any individual or object in its environment, to retain balance even when it collides with objects, and to mirror user movements with seamless and intuitive control<ref>[https://mag.toyota.co.uk/t-hr3-toyota-remote-control-robot/ T-HR3 - Toyota's new remote-controlled robot]</ref>.
The development of the T-HR3 is championed by a research and development group led by Tomohisa Moridaira, based at Toyota's Tokyo Head Office.<ref>[https://global.toyota/en/newsroom/corporate/30609642.html Why is Toyota Developing Humanoid Robots?]</ref>
== References ==
<references />
[[Category:Companies]]
9c3befc469e7cd2f5593df594577954052f5c309
PX5
0
111
625
594
2024-04-29T00:42:48Z
194.195.93.78
0
wikitext
text/x-wiki
The PX5 is a humanoid robot developed by Xpeng, unveiled for the first time during Xpeng Motors' Tech Day in 2023.<ref>https://kr-asia.com/xpeng-motors-unveils-px5-humanoid-robot-underlining-its-vision-for-the-future</ref> The robot stands approximately 1.5 meters in height and is able to navigate different terrain and handle objects with precision, demonstrating remarkable stability.<ref>https://technode.com/2023/10/25/xpeng-tech-day-2023-first-mpv-self-driving-timeline-flying-cars-and-humanoid-robots/</ref> Constructed with a silver-white color scheme, the PX5 is also resistant to shock.<ref>https://kr-asia.com/xpeng-motors-unveils-px5-humanoid-robot-underlining-its-vision-for-the-future</ref>
{{infobox robot
| name = PX5
| organization = [[Xpeng]]
| height = 1.5 meters
| weight =
| video_link = https://www.youtube.com/watch?v=BNSZ8Fwcd20
| cost =
}}
== Development ==
Xpeng Robotics, an ecosystem company of Xpeng specializing in smart robots, revealed the PX5. The company, founded in 2016, works on robot powertrains, locomotion control, robot autonomy, robot interaction, and artificial intelligence, contributing to a shared mission of exploring future mobility solutions.<ref>https://www.pxing.com/en/about</ref>
== Design and Capabilities ==
The PX5 bears a striking silver-white finish and exhibits resistance to shock. Demonstrations have highlighted its ability to navigate different terrains and to handle small objects, such as a pen, with exceptional stability.<ref>https://kr-asia.com/xpeng-motors-unveils-px5-humanoid-robot-underlining-its-vision-for-the-future</ref> <ref>https://technode.com/2023/10/25/xpeng-tech-day-2023-first-mpv-self-driving-timeline-flying-cars-and-humanoid-robots/</ref>
== References ==
<references />
[[Category:Robots]]
2e0b3a4cde3f99bf769a1430f4ca8e1e7473db9b
Tesla
0
7
626
505
2024-04-29T00:43:05Z
194.195.93.78
0
wikitext
text/x-wiki
Tesla, an innovative company known for its electric vehicles and sustainable energy solutions, has embarked on the development of a humanoid robot named Optimus.
Optimus is envisioned to be a groundbreaking humanoid robot designed by Tesla to perform a wide range of tasks autonomously. Tesla's foray into humanoid robotics with Optimus showcases the company's commitment to advancing technology beyond the realm of electric vehicles.
This project signifies Tesla's exploration of robotics and artificial intelligence beyond its traditional automotive focus. By venturing into humanoid robot development, Tesla aims to revolutionize industries and everyday tasks with cutting-edge automation.
For more information about Tesla and its humanoid robot Optimus, interested individuals can visit Tesla's official website.
{{infobox company
| name = Tesla
| country = United States
| website_link = https://www.tesla.com/
| robots = [[Optimus]]
}}
[[Category: Companies]]
8f3e3f148f5bf53217a135dfa81954f14279d838
Reinforcement Learning
0
34
628
340
2024-04-29T01:52:40Z
194.195.93.80
0
wikitext
text/x-wiki
== Training algorithms ==
* [https://en.wikipedia.org/wiki/Advantage_Actor_Critic A2C]
* [https://en.wikipedia.org/wiki/Proximal_policy_optimization PPO] (see the example sketch below)
* [https://spinningup.openai.com/en/latest/algorithms/sac.html SAC]
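As a concrete illustration, the following minimal sketch trains PPO on a simulated humanoid locomotion task. It assumes the third-party libraries gymnasium (with the MuJoCo extras) and stable-baselines3 are installed; the environment name, hyperparameters, and step budget are illustrative rather than a recommended recipe.
<syntaxhighlight lang=python>
# Minimal PPO training sketch (assumes: pip install "gymnasium[mujoco]" stable-baselines3)
import gymnasium as gym
from stable_baselines3 import PPO

env = gym.make("Humanoid-v4")             # torque-controlled humanoid locomotion task
model = PPO("MlpPolicy", env, verbose=1)  # on-policy actor-critic algorithm
model.learn(total_timesteps=1_000_000)    # step budget depends heavily on the task
model.save("ppo_humanoid")                # reload later with PPO.load("ppo_humanoid")
env.close()
</syntaxhighlight>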
== Resources ==
* [https://mandi-zhao.gitbook.io/deeprl-notes Mandy Zhao's Reinforcement Learning Notes]
c82f1e0fe0e614e045c68823bb1be532e1623391
Kayra
0
153
633
2024-04-29T17:41:09Z
185.187.168.105
0
Created page with "Kayra is an open-source 3D printable robot. The project is avalable [https://github.com/assadollahi/kayra here]. [[Category: Robots]] [[Category: Stompy, Expand!]]"
wikitext
text/x-wiki
Kayra is an open-source 3D printable robot. The project is available [https://github.com/assadollahi/kayra here].
[[Category: Robots]]
[[Category: Stompy, Expand!]]
7115b976f10a17e4956934704886a012415e95f2
635
633
2024-04-29T18:04:19Z
Stompy
14
Bot expanded article
wikitext
text/x-wiki
'''Kayra''' is an open-source 3D-printable humanoid robot designed for easy modification and assembly. The project files, including design files for FreeCAD, can be found on [https://github.com/assadollahi/kayra GitHub]. Kayra's development is community-based, with goals to evolve the design in various directions.<ref>[https://github.com/assadollahi/kayra GitHub - assadollahi/kayra]</ref>
Kayra is designed to be 3D printed and assembled with minimal effort; ideally, no soldering is necessary. The software for Kayra can be written in Python for the host and MicroPython for the controllers. The project also includes simulation software, enabling users to work on it even without physical hardware. The cost of a minimal system is intended to be low, enhancing accessibility and encouraging community participation.<ref>[https://github.com/assadollahi/kayra GitHub - assadollahi/kayra]</ref>
Tutorial videos are available, providing real-time assembly instructions. For example, one recorded tutorial guides users through the process of assembling Kayra's left leg from printed parts, servos, bearings, and screws. The leg assembly can be completed within approximately 20 minutes.<ref>[https://kayra.org/ Home - the 3D printable, easy to modify, open-source humanoid]</ref>
Please refer to the official [https://kayra.org/ Kayra website] and its [https://github.com/assadollahi/kayra GitHub page] for full details, shopping list, print instructions, and more.
{{infobox robot
| name = Kayra
| organization = Open Source Community
| video_link =
| cost = Low
| height =
| weight =
| speed =
| lift_force =
| battery_life =
| battery_capacity =
| purchase_link =
| number_made =
| dof =
| status = Active development
}}
[[Category: Robots]]
[[Category: Open Source Robots]]
[[Category: 3D Printable Robots]]
== References ==
<references />
5cabe363b1a418b4f1a230c9f58d76f3fc3c17d2
655
635
2024-04-29T19:28:51Z
Ben
2
wikitext
text/x-wiki
'''Kayra''' is an open-source 3D-printable humanoid robot designed for easy modification and assembly. The project files, including design files for FreeCAD, can be found on [https://github.com/assadollahi/kayra GitHub]. Kayra's development is community-based, with goals to evolve the design in various directions.<ref>[https://github.com/assadollahi/kayra GitHub - assadollahi/kayra]</ref>
Kayra is designed to be 3D printed and assembled with minimal effort; ideally, no soldering is necessary. The software for Kayra can be written in Python for the host and MicroPython for the controllers. The project also includes simulation software, enabling users to work on it even without physical hardware. The cost of a minimal system is intended to be low, enhancing accessibility and encouraging community participation.<ref>[https://github.com/assadollahi/kayra GitHub - assadollahi/kayra]</ref>
Tutorial videos are available, providing real-time assembly instructions. For example, one recorded tutorial guides users through the process of assembling Kayra's left leg from printed parts, servos, bearings, and screws. The leg assembly can be completed within approximately 20 minutes.<ref>[https://kayra.org/ Home - the 3D printable, easy to modify, open-source humanoid]</ref>
Please refer to the official [https://kayra.org/ Kayra website] and its [https://github.com/assadollahi/kayra GitHub page] for full details, shopping list, print instructions, and more.
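As an illustration of the controller side, the sketch below drives a single hobby servo from a MicroPython board using the built-in machine module. This is a generic example, not Kayra's actual firmware; the GPIO pin number and pulse widths are assumptions for a typical 50 Hz hobby servo.
<syntaxhighlight lang=python>
# Generic MicroPython sketch (not taken from the Kayra firmware): drive one
# hobby servo with a 50 Hz PWM signal. Pin number and pulse widths are assumptions.
from machine import Pin, PWM
import time

servo = PWM(Pin(15))   # assumed GPIO pin
servo.freq(50)         # standard hobby-servo frame rate (20 ms period)

def set_angle(deg):
    # Map 0-180 degrees onto an (assumed) 0.5 ms - 2.5 ms pulse within the 20 ms frame.
    pulse_ms = 0.5 + (deg / 180) * 2.0
    servo.duty_u16(int(pulse_ms / 20 * 65535))

set_angle(90)          # move to the mid position
time.sleep(1)
set_angle(0)
</syntaxhighlight>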
{{infobox robot
| name = Kayra
| organization = Open Source Community
| video_link =
| cost = Low
| height =
| weight =
| speed =
| lift_force =
| battery_life =
| battery_capacity =
| purchase_link =
| number_made =
| dof =
| status = Active development
}}
== References ==
<references />
[[Category:Robots]]
[[Category:Open Source]]
4600808d10da0493e037db3548ca5899dca72d80
Building a PCB
0
41
636
204
2024-04-29T18:14:45Z
Ben
2
wikitext
text/x-wiki
Walk-through and notes regarding how to design and ship a PCB.
== Resources ==
* [[atopile]]
[[Category: Hardware]]
[[Category: Guides]]
[[Category: Electronics]]
4e1c981e24ebe49198f05f433fbf564c8fd0164d
641
636
2024-04-29T18:20:55Z
Ben
2
wikitext
text/x-wiki
Walk-through and notes regarding how to design and ship a PCB.
== Related Articles ==
* [[atopile]]
[[Category: Hardware]]
[[Category: Guides]]
[[Category: Electronics]]
f213abd306b6687a1a042e9379c751b442529baa
644
641
2024-04-29T19:05:45Z
Matt
16
Add initial atopile information and ordering information
wikitext
text/x-wiki
Walk-through and notes regarding how to design and ship a PCB.
== Designing with atopile ==
[[atopile]] enables code-defined PCB design. Follow atopile's [https://atopile.io/getting-started/ getting-started] guide to set up your project.
An example atopile PCB project is provided by the K-Scale Labs team [https://github.com/kscalelabs/atopile-pcb-example here].
== Ordering a PCB ==
Trusted PCB manufacturers for low-volume production runs:
* PCBWay
* JLCPCB
* SeeedStudio
Further PCB manufacturers, with price comparisons for your specific project, can be found [https://pcbshopper.com/ here].
== Related Articles ==
* [[atopile]]
[[Category: Hardware]]
[[Category: Guides]]
[[Category: Electronics]]
e45da6806676211370e2ff8f06ffd083ebf26f70
651
644
2024-04-29T19:27:02Z
Ben
2
wikitext
text/x-wiki
Walk-through and notes regarding how to design and ship a PCB.
== Designing with atopile ==
[[atopile]] enables code-defined PCB design. Follow atopile's [https://atopile.io/getting-started/ getting-started] guide to set up your project.
An example atopile PCB project is provided by the K-Scale Labs team [https://github.com/kscalelabs/atopile-pcb-example here].
== Ordering a PCB ==
Trusted PCB manufacturers for low-volume production runs:
* PCBWay
* JLCPCB
* SeeedStudio
Further PCB manufacturers, with price comparisons for your specific project, can be found [https://pcbshopper.com/ here].
== Related Articles ==
* [[atopile]]
[[Category: Hardware]]
[[Category: Guides]]
[[Category: Electronics]]
dd02e3b1bcc54906783d0bea890acc2e5e774893
Atopile
0
154
637
2024-04-29T18:15:21Z
Ben
2
Created page with "[https://atopile.io/ atopile] is a language and toolchain to describe electronic circuit boards with code. [[Category: Stompy, Expand!]]"
wikitext
text/x-wiki
[https://atopile.io/ atopile] is a language and toolchain to describe electronic circuit boards with code.
[[Category: Stompy, Expand!]]
4e88bf0667de28907e1ccbf8e2eff9e479c6af0c
639
637
2024-04-29T18:19:36Z
Ben
2
wikitext
text/x-wiki
[https://atopile.io/ atopile] is a language and toolchain to describe electronic circuit boards with code.
{{#related:Building a PCB}}
[[Category: Stompy, Expand!]]
[[Category: Electronics]]
d113d7c160db72cfd7c950b81597ecc76b176014
640
639
2024-04-29T18:20:43Z
Ben
2
wikitext
text/x-wiki
[https://atopile.io/ atopile] is a language and toolchain to describe electronic circuit boards with code.
=== Related Articles ===
* [[Building a PCB]]
[[Category: Stompy, Expand!]]
[[Category: Electronics]]
344f0cd074e28ef476682122fc310f706b67b2ba
642
640
2024-04-29T19:04:16Z
Stompy
14
Bot expanded article
wikitext
text/x-wiki
Atopile is a programming language and integrated toolchain for describing electronic circuit boards as code, streamlining their creation and modification. It offers an alternative to traditional schematic capture, transcribing circuit designs into organized, scalable, and reusable code.
{{infobox company
| name = Atopile
| country = Unknown
| website_link = https://atopile.io/
| robots = None
}}
== Overview ==
Atopile's unique approach of treating hardware design like software development paves the way for features such as version control, automated testing, and modularity, which have long been staples in the software development industry. By allowing circuit designs to be written, understood, and manipulated as code, Atopile enables a new level of flexibility and efficiency for electronics engineers.
== Functionality ==
The Atopile language and toolchain is equipped with a range of features that enhance both the design and prototyping stages of electronic circuit board development. Some core functionalities include:
* '''Iterative Development:''' Atopile supports version control, allowing for iterative development, history tracking, and collaboration, much like software development environments. This enables controlled revision and evolution of circuit boards, improving product quality over time because earlier designs can be updated rather than discarded.
* '''Modularity:''' With the language used in Atopile, circuit designs can be broken down into modules, making them reusable across different projects. This function promotes better code organization and reduces redundancy.
* '''Automated Testing:''' Like in software development, automated testing can be conducted within Atopile's ecosystem, ensuring correctness and reliability of designs and minimizing chances of hardware failures.
* '''Compatibility:''' Atopile's code can be exported to compatible formats for other electronic design automation (EDA) tools. This offers a higher level of versatility and interoperability with other development environments.
== Use Cases ==
Atopile can be utilized for a wide range of applications, including academic research, commercial product development, prototyping, DIY electronics projects, and educational purposes.
== Constructive Criticism ==
As with any tool, Atopile is not without its criticisms. One primary concern is the steep learning curve associated with mastering a new language. This might pose difficulties for those not familiar with software development principles. Furthermore, while the abstraction of circuit design into code offers many benefits, it may conceal certain hardware complexities that are more intuitively understood in traditional visual schematic form.
== References ==
<references />
=== Related Articles ===
* [[Building a PCB]]
[[Category: Electronics]]
8880bfe03ef6f0200ad229c9f0db1af69642ae4f
645
642
2024-04-29T19:06:51Z
Ben
2
wikitext
text/x-wiki
Atopile is a programming language and integrated toolchain for describing electronic circuit boards as code, streamlining their creation and modification. It offers an alternative to traditional schematic capture, transcribing circuit designs into organized, scalable, and reusable code.
{{infobox company
| name = Atopile
| country = United States
| website_link = https://atopile.io/
}}
== Overview ==
Atopile's unique approach of treating hardware design like software development paves the way for features such as version control, automated testing, and modularity, which have long been staples in the software development industry. By allowing circuit designs to be written, understood, and manipulated as code, Atopile enables a new level of flexibility and efficiency for electronics engineers.
== Functionality ==
The Atopile language and toolchain is equipped with a range of features that enhance both the design and prototyping stages of electronic circuit board development. Some core functionalities include:
* '''Iterative Development:''' Atopile supports version control, allowing for iterative development, history tracking, and collaboration, much like software development environments. This enables controlled revision and evolution of circuit boards, improving product quality over time because earlier designs can be updated rather than discarded.
* '''Modularity:''' With the language used in Atopile, circuit designs can be broken down into modules, making them reusable across different projects. This function promotes better code organization and reduces redundancy.
* '''Automated Testing:''' Like in software development, automated testing can be conducted within Atopile's ecosystem, ensuring correctness and reliability of designs and minimizing chances of hardware failures.
* '''Compatibility:''' Atopile's code can be exported to compatible formats for other electronic design automation (EDA) tools. This offers a higher level of versatility and interoperability with other development environments.
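To make the modularity and automated-testing ideas concrete, here is a purely conceptual sketch. It is written in ordinary Python rather than atopile's own syntax, and the module, component values, and test below are invented for illustration only.
<syntaxhighlight lang=python>
# Conceptual illustration only: ordinary Python, not atopile's .ato syntax.
# It sketches the idea of describing a circuit as reusable, testable modules.

class Resistor:
    def __init__(self, value_ohms: float):
        self.value_ohms = value_ohms

class VoltageDivider:
    """A reusable 'module': two resistors giving Vout = Vin * R2 / (R1 + R2)."""
    def __init__(self, r1_ohms: float, r2_ohms: float):
        self.r1 = Resistor(r1_ohms)
        self.r2 = Resistor(r2_ohms)

    def output_voltage(self, vin: float) -> float:
        return vin * self.r2.value_ohms / (self.r1.value_ohms + self.r2.value_ohms)

# The same module can be reused across projects and checked automatically,
# for example as part of a test suite run in continuous integration.
def test_divider_halves_input():
    divider = VoltageDivider(10_000, 10_000)
    assert abs(divider.output_voltage(5.0) - 2.5) < 1e-9

if __name__ == "__main__":
    test_divider_halves_input()
    print("Divider output from 5 V input:", VoltageDivider(10_000, 10_000).output_voltage(5.0))
</syntaxhighlight>
Because such a module lives in a plain text file, it can be tracked in version control and exercised by automated tests in the same way as application code.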
== Use Cases ==
Atopile can be utilized for a wide range of applications, including academic research, commercial product development, prototyping, DIY electronics projects, and educational purposes.
== Related Articles ==
* [[Building a PCB]]
== References ==
<references />
[[Category: Electronics]]
19c82d4e718a9555656ac519604bd09f344e31c3
Controller Area Network (CAN)
0
155
638
2024-04-29T18:17:09Z
Ben
2
Created page with "The Controller Area Network is a vehicle bus standard designed to allow microcontrollers and devices to communicate with each other. === MCP2515 === The MCP2515 chip is an i..."
wikitext
text/x-wiki
The Controller Area Network is a vehicle bus standard designed to allow microcontrollers and devices to communicate with each other.
=== MCP2515 ===
The MCP2515 chip is an integrated circuit that acts as a stand-alone CAN controller with SPI interface.
[[Category: Stompy, Expand!]]
a7d5f16821b9db1493f1d7b3608f39545120afb7
643
638
2024-04-29T19:04:46Z
Stompy
14
Bot expanded article
wikitext
text/x-wiki
{{infobox actuator
| name = MCP2515
| manufacturer = Microchip Technology
| cost = Not listed
| purchase_link = Not listed
| nominal_torque = Not applicable
| peak_torque = Not applicable
| weight = Not applicable
| dimensions = Not listed
| gear_ratio = Not applicable
| voltage = Not listed
| cad_link = Not listed
| interface = SPI interface
| gear_type = Not applicable
}}
== Controller Area Network ==
The '''Controller Area Network''' (CAN) is a vehicle bus standard designed to let microcontrollers and devices communicate with one another without a central host computer. Its primary purpose is to improve interconnectivity and enable fast data exchange between the various systems within a vehicle.
=== MCP2515 ===
The '''MCP2515''' is an integrated circuit produced by Microchip Technology that functions as a stand-alone CAN controller with an SPI (Serial Peripheral Interface) connection to a host microcontroller. Although it is most commonly used in the automotive industry, it also appears in a wide range of other control applications.<ref>MCP2515 Stand-Alone CAN Controller with SPI Interface. Microchip Technology. [https://www.microchip.com/wwwproducts/en/en010406 Official datasheet].</ref>
The MCP2515 bridges the CAN and SPI protocols: it receives CAN messages from the bus and presents them to the host microcontroller over SPI, and it takes data supplied over SPI and transmits it as CAN messages.<ref>MCP2515 Stand-Alone CAN Controller with SPI Interface. Microchip Technology. [https://www.microchip.com/wwwproducts/en/en010406 Official datasheet].</ref>
The MCP2515 supports the CAN 2.0A and 2.0B specifications, so it can be used with both the standard and extended frame formats.<ref>MCP2515 Stand-Alone CAN Controller with SPI Interface. Microchip Technology. [https://www.microchip.com/wwwproducts/en/en010406 Official datasheet].</ref>
== References ==
<references />
[[Category: Controller Area Network]]
[[Category: Integrated Circuits]]
[[Category: Automotive Technology]]
[[Category: Robotics Communications]]
9935ca0e82d1837ac65a1dfdd4d394afbb60023f
646
643
2024-04-29T19:07:18Z
Ben
2
wikitext
text/x-wiki
{{infobox actuator
| name = MCP2515
| manufacturer = Microchip Technology
| cost = Not listed
| purchase_link = Not listed
| nominal_torque = Not applicable
| peak_torque = Not applicable
| weight = Not applicable
| dimensions = Not listed
| gear_ratio = Not applicable
| voltage = Not listed
| cad_link = Not listed
| interface = SPI interface
| gear_type = Not applicable
}}
== Controller Area Network ==
The '''Controller Area Network''' (CAN) is a vehicle bus standard designed to let microcontrollers and devices communicate with one another without a central host computer. Its primary purpose is to improve interconnectivity and enable fast data exchange between the various systems within a vehicle.
=== MCP2515 ===
The '''MCP2515''' is an integrated circuit produced by Microchip Technology that functions as a stand-alone CAN controller with an SPI (Serial Peripheral Interface) connection to a host microcontroller. Although it is most commonly used in the automotive industry, it also appears in a wide range of other control applications.<ref>MCP2515 Stand-Alone CAN Controller with SPI Interface. Microchip Technology. [https://www.microchip.com/wwwproducts/en/en010406 Official datasheet].</ref>
The MCP2515 bridges the CAN and SPI protocols: it receives CAN messages from the bus and presents them to the host microcontroller over SPI, and it takes data supplied over SPI and transmits it as CAN messages.<ref>MCP2515 Stand-Alone CAN Controller with SPI Interface. Microchip Technology. [https://www.microchip.com/wwwproducts/en/en010406 Official datasheet].</ref>
The MCP2515 supports the CAN 2.0A and 2.0B specifications, so it can be used with both the standard and extended frame formats.<ref>MCP2515 Stand-Alone CAN Controller with SPI Interface. Microchip Technology. [https://www.microchip.com/wwwproducts/en/en010406 Official datasheet].</ref>
== References ==
<references />
0e7169ff1fa8c99f2c6bbbd801a3e0d8739250d6
647
646
2024-04-29T19:07:32Z
Ben
2
wikitext
text/x-wiki
== Controller Area Network ==
The '''Controller Area Network''' (CAN) is a vehicle bus standard designed to let microcontrollers and devices communicate with one another without a central host computer. Its primary purpose is to improve interconnectivity and enable fast data exchange between the various systems within a vehicle.
=== MCP2515 ===
The '''MCP2515''' is an integrated circuit produced by Microchip Technology that functions as a stand-alone CAN controller with an SPI (Serial Peripheral Interface) connection to a host microcontroller. Although it is most commonly used in the automotive industry, it also appears in a wide range of other control applications.<ref>MCP2515 Stand-Alone CAN Controller with SPI Interface. Microchip Technology. [https://www.microchip.com/wwwproducts/en/en010406 Official datasheet].</ref>
The MCP2515 bridges the CAN and SPI protocols: it receives CAN messages from the bus and presents them to the host microcontroller over SPI, and it takes data supplied over SPI and transmits it as CAN messages.<ref>MCP2515 Stand-Alone CAN Controller with SPI Interface. Microchip Technology. [https://www.microchip.com/wwwproducts/en/en010406 Official datasheet].</ref>
The MCP2515 supports the CAN 2.0A and 2.0B specifications, so it can be used with both the standard and extended frame formats.<ref>MCP2515 Stand-Alone CAN Controller with SPI Interface. Microchip Technology. [https://www.microchip.com/wwwproducts/en/en010406 Official datasheet].</ref>
== References ==
<references />
39616c557f52b6113b96a1dd89c5748ae513ec67
648
647
2024-04-29T19:07:40Z
Ben
2
wikitext
text/x-wiki
The '''Controller Area Network''' (CAN) is a vehicle bus standard designed to let microcontrollers and devices communicate with one another without a central host computer. Its primary purpose is to improve interconnectivity and enable fast data exchange between the various systems within a vehicle.
=== MCP2515 ===
The '''MCP2515''' is an integrated circuit produced by Microchip Technology that functions as a stand-alone CAN controller with an SPI (Serial Peripheral Interface) connection to a host microcontroller. Although it is most commonly used in the automotive industry, it also appears in a wide range of other control applications.<ref>MCP2515 Stand-Alone CAN Controller with SPI Interface. Microchip Technology. [https://www.microchip.com/wwwproducts/en/en010406 Official datasheet].</ref>
The MCP2515 bridges the CAN and SPI protocols: it receives CAN messages from the bus and presents them to the host microcontroller over SPI, and it takes data supplied over SPI and transmits it as CAN messages.<ref>MCP2515 Stand-Alone CAN Controller with SPI Interface. Microchip Technology. [https://www.microchip.com/wwwproducts/en/en010406 Official datasheet].</ref>
The MCP2515 supports the CAN 2.0A and 2.0B specifications, so it can be used with both the standard and extended frame formats.<ref>MCP2515 Stand-Alone CAN Controller with SPI Interface. Microchip Technology. [https://www.microchip.com/wwwproducts/en/en010406 Official datasheet].</ref>
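On Linux systems, the MCP2515 is commonly exposed as a SocketCAN network interface through the in-kernel mcp251x driver, after which frames can be exchanged with standard CAN tooling. The following minimal sketch uses the third-party python-can library; the interface name can0, the bitrate, and the identifier and payload values are assumptions for illustration.
<syntaxhighlight lang=python>
# Minimal send/receive sketch using python-can over SocketCAN.
# Assumes the MCP2515 has already been brought up as a CAN interface, e.g.:
#   sudo ip link set can0 up type can bitrate 500000
import can

# Open the SocketCAN interface ("can0" is an assumption; newer python-can
# versions also accept interface= instead of bustype=).
bus = can.interface.Bus(channel="can0", bustype="socketcan")

# Build and send a standard-frame (11-bit identifier) message.
message = can.Message(arbitration_id=0x123,
                      data=[0x01, 0x02, 0x03, 0x04],
                      is_extended_id=False)
bus.send(message)

# Wait up to one second for any frame on the bus.
reply = bus.recv(timeout=1.0)
if reply is not None:
    print(f"Received ID=0x{reply.arbitration_id:X} data={reply.data.hex()}")

bus.shutdown()
</syntaxhighlight>
It is also possible to drive the chip's SPI instruction set directly, for example from a microcontroller without a CAN peripheral; see the datasheet referenced above for the register and instruction details.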
== References ==
<references />
614e870bceeec16427c679eed2852cbb10cf2edb
Stompy To-Do List
0
156
649
2024-04-29T19:24:50Z
Ben
2
Created page with "This document acts as a progress tracker for the [[Stompy]] robot build. == To Do == === Teleop === * Create a VR interface === Firmware === * Get * === Electronics ===..."
wikitext
text/x-wiki
This document acts as a progress tracker for the [[Stompy]] robot build.
== To Do ==
=== Teleop ===
* Create a VR interface
=== Firmware ===
* Get
*
=== Electronics ===
* Understand how to build a PCB and update the [[Building a PCB]] guide
** Get PCBs designed and shipped
** Two PCBs: One for the head and one for the body
*** Head PCB will mount a Jetson Nano. It will have two MIPI CSI ports for two cameras from the eyes, plus audio and speakers. The Jetson Nano will act as a "tokenizer" which multiplexes the signals over ethernet to the main board in the body
*** The body PCB will mount a full Jetson Orin, as well as the IMU, CAN transceivers, and power management
* Debug issues with the [[CAN Bus]] - specifically, write our own driver for the MCP2515 IC (one of the most common ICs for sending and receiving messages over a CAN bus)
9a09320fe88f3c53fd3dc5655abf9af384bcc05d
650
649
2024-04-29T19:25:37Z
Ben
2
wikitext
text/x-wiki
This document acts as a progress tracker for the [[Stompy]] robot build.
== To Do ==
=== Teleop ===
* Create a VR interface for controlling Stompy in simulation
* Port VR interface to work on the full robot
=== Firmware ===
* Debug issues with the [[CAN Bus]] - specifically, write our own driver for the MCP2515 IC (one of the most common ICs for sending and receiving messages over a CAN bus)
* Set up continuous integration for building and flashing the full operating system
=== Electronics ===
* Understand how to build a PCB and update the [[Building a PCB]] guide
** Get PCBs designed and shipped
** Two PCBs: One for the head and one for the body
*** Head PCB will mount a Jetson Nano. It will have two MIPI CSI ports for two cameras from the eyes, plus audio and speakers. The Jetson Nano will act as a "tokenizer" which multiplexes the signals over ethernet to the main board in the body
*** The body PCB will mount a full Jetson Orin, as well as the IMU, CAN transceivers, and power management
2b51e0576fc0d8cedc6a549f38b4dfffb590264b
661
650
2024-04-29T19:31:26Z
Ben
2
wikitext
text/x-wiki
This document acts as a progress tracker for the [[Stompy]] robot build.
== To Do ==
=== Teleop ===
* Create a VR interface for controlling Stompy in simulation
* Port VR interface to work on the full robot
=== Firmware ===
* Debug issues with the [[CAN Bus]] - specifically, write our own driver for the MCP2515 IC (one of the most common ICs for sending and receiving messages over a CAN bus)
* Set up continuous integration for building and flashing the full operating system
=== Actuator ===
* Build on top of a good open-source actuator like the [[SPIN Drive]] or [[OBot]] actuator to design and fabricate our own open-source actuator
=== Electronics ===
* Understand how to build a PCB and update the [[Building a PCB]] guide
** Get PCBs designed and shipped
** Two PCBs: One for the head and one for the body
*** Head PCB will mount a Jetson Nano. It will have two MIPI CSI ports for two cameras from the eyes, plus audio and speakers. The Jetson Nano will act as a "tokenizer" which multiplexes the signals over ethernet to the main board in the body
*** The body PCB will mount a full Jetson Orin, as well as the IMU, CAN transceivers, and power management
79ce754f2b12deb624c8aa0bb78062287cd2b8eb
662
661
2024-04-29T19:31:37Z
Ben
2
wikitext
text/x-wiki
This document acts as a progress tracker for the [[Stompy]] robot build.
== To Do ==
=== Teleop ===
* Create a VR interface for controlling Stompy in simulation
* Port VR interface to work on the full robot
=== Firmware ===
* Debug issues with the [[CAN Bus]] - specifically, write our own driver for the MCP2515 IC (one of the most common ICs for sending and receiving messages over a CAN bus)
* Set up continuous integration for building and flashing the full operating system
=== Actuator ===
* Build on top of a good open-source actuator like the [[SPIN Servo]] or [[OBot]] actuator to design and fabricate our own open-source actuator
=== Electronics ===
* Understand how to build a PCB and update the [[Building a PCB]] guide
** Get PCBs designed and shipped
** Two PCBs: One for the head and one for the body
*** Head PCB will mount a Jetson Nano. It will have two MIPI CSI ports for two cameras from the eyes, plus audio and speakers. The Jetson Nano will act as a "tokenizer" which multiplexes the signals over ethernet to the main board in the body
*** The body PCB will mount a full Jetson Orin, as well as the IMU, CAN transceivers, and power management
59fb9df3f41b2096a50905959e9fa55647f5c197
663
662
2024-04-29T19:33:58Z
Ben
2
wikitext
text/x-wiki
This document acts as a progress tracker for the [[Stompy]] robot build.
== To Do ==
=== Teleop ===
* Create a VR interface for controlling Stompy in simulation
* Port VR interface to work on the full robot
=== Firmware ===
* Debug issues with the [[CAN Bus]] - specifically, write our own driver for the MCP2515 IC (one of the most common ICs for sending and receiving messages over a CAN bus); a rough SPI sketch follows this list
* Set up continuous integration for building and flashing the full operating system
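As a starting point for the MCP2515 driver work above, the chip can be exercised directly over SPI using instructions documented in its datasheet (RESET is 0xC0, READ is 0x03, and the CANSTAT register sits at address 0x0E). The rough sketch below uses the Python spidev library and assumes the controller is wired to SPI bus 0, chip select 0.
<syntaxhighlight lang=python>
# Rough sketch: reset an MCP2515 and read its status register over SPI.
# Bus and chip-select numbers are assumptions; adjust for the actual wiring.
import time
import spidev

RESET_CMD = 0xC0   # MCP2515 SPI RESET instruction
READ_CMD = 0x03    # MCP2515 SPI READ instruction
CANSTAT = 0x0E     # CAN status register address

spi = spidev.SpiDev()
spi.open(0, 0)                 # SPI bus 0, chip select 0
spi.max_speed_hz = 1_000_000
spi.mode = 0                   # the MCP2515 supports SPI modes 0,0 and 1,1

# Reset the controller; it comes back up in configuration mode.
spi.xfer2([RESET_CMD])
time.sleep(0.01)

# Read CANSTAT: send READ, the register address, then clock out one byte.
response = spi.xfer2([READ_CMD, CANSTAT, 0x00])
canstat = response[2]
print(f"CANSTAT = 0x{canstat:02X} (operation mode bits = {canstat >> 5:#05b})")

spi.close()
</syntaxhighlight>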
=== Actuator ===
* Build on top of a good open-source actuator like the [[SPIN Servo]], [[MIT Cheetah]] or [[OBot]] actuator to design and fabricate our own open-source actuator
=== Electronics ===
* Understand how to build a PCB and update the [[Building a PCB]] guide
** Get PCBs designed and shipped
** Two PCBs: One for the head and one for the body
*** Head PCB will mount a Jetson Nano. It will have two MIPI CSI ports for two cameras from the eyes, plus audio and speakers. The Jetson Nano will act as a "tokenizer" which multiplexes the signals over ethernet to the main board in the body
*** The body PCB will mount a full Jetson Orin, as well as the IMU, CAN transceivers, and power management
c441f8a9aae494590ecc65bf4d11642640e16fed
SPIN Servo
0
90
652
566
2024-04-29T19:27:47Z
Ben
2
wikitext
text/x-wiki
{{infobox actuator
| name = SPIN Servo
| manufacturer = Holry Motor
| cost = USD 30 (BOM)
| purchase_link = https://shop.atopile.io/
| nominal_torque = 0.125 N·m
| peak_torque = 0.375 N·m
| weight = 311.6g
| dimensions = 42mm x 42mm x 60mm
| gear_ratio = Direct drive (bolt on options)
| voltage = 12V-24V
| cad_link = https://github.com/atopile/spin-servo-drive/tree/main/mech
| interface = CAN bus, I2C
}}
== Overview ==
The SPIN Servo is an open-source hardware project, designed to make fully-fledged Brushless DC (BLDC) servo motors easy and cost-effective to use<ref>[https://github.com/atopile/spin-servo-drive GitHub - atopile/spin-servo-drive: SPIN - Servos are awesome]</ref>. It is primarily engineered by atopile, which is known for its toolchain for describing electronic circuit boards with code<ref>[https://atopile.io/spin/ SPIN - atopile]</ref>. The intention behind the project is to bring software development workflows such as reuse, validation, and automation into the world of electronics.
The SPIN Servo is manufactured by the Holry Motor company. It weighs 311.6g and its dimensions are 42mm x 42mm x 60mm. The cost of the Bill of Materials (BOM) is USD 30. The nominal torque of the SPIN Servo is 0.125 N·m, with a peak of 0.375 N·m. It interfaces over CAN bus and I2C.
It operates at a voltage of 12V-24V and is direct drive, although bolt-on gearing options are available. All designs and schematics related to the SPIN Servo can be found in the project's official GitHub repository<ref>[https://github.com/atopile/spin-servo-drive/tree/main/mech SPIN Servo CAD - GitHub]</ref>.
Interested individuals can purchase the SPIN Servo from the official atopile shop<ref>[https://shop.atopile.io/ Atopile Shop]</ref>.
== References ==
<references />
[[Category:Actuators]]
[[Category:Open Source]]
23a86b767871a4cecb7f4401a2d6fcf758be62da
Category:Open Source
14
157
653
2024-04-29T19:28:17Z
Ben
2
Created page with "This is a tag for projects which are open-source, meaning that you can see the full source code or reference design online."
wikitext
text/x-wiki
This is a tag for projects which are open-source, meaning that you can see the full source code or reference design online.
4e1aa0a75a7c9435ba02eb3faa92d44ac44a282c
OBot
0
89
654
380
2024-04-29T19:28:29Z
Ben
2
wikitext
text/x-wiki
The [https://github.com/unhuman-io/obot OBot] is an open-source robot designed for doing mobile manipulation.
{{infobox actuator
| name = OBot
| voltage = 36V
| interface = USB
}}
[[Category:Actuators]]
[[Category:Open Source]]
30ae1a1677c60e74e065e7c3b42eeaf2930021c4
Stompy
0
2
656
258
2024-04-29T19:29:12Z
Ben
2
wikitext
text/x-wiki
[[File:Stompy.jpg|right|300px|thumb]]
{{infobox robot
| name = Stompy
| organization = [[K-Scale Labs]]
| cost = USD 10,000
}}
Stompy is an open-source humanoid robot developed by [[K-Scale Labs]]. See the [[Stompy Build Guide|build guide]] for a walk-through of how to build one yourself.
= Hardware =
This page is dedicated to detailing the hardware selections for humanoid robots, including various components such as actuators, cameras, compute units, PCBs and modules, batteries, displays, microphones, speakers, as well as wiring and connectors.
== Actuators ==
Actuators are the components that allow the robot to move and interact with its environment. They convert energy into mechanical motion. Common types used in humanoid robots include:
* Servo motors
* Stepper motors
* Linear actuators
== Cameras ==
Cameras are essential for visual processing, allowing the robot to perceive its surroundings. Important considerations include:
* Resolution and frame rate
* Field of view
* Depth sensing capabilities (3D cameras)
== Compute ==
The compute section handles the processing requirements of the robot. This includes:
* Microprocessors and microcontrollers
* Single-board computers like Raspberry Pi or Nvidia Jetson
* Dedicated AI accelerators for machine learning tasks
== PCB and Modules ==
Printed Circuit Boards (PCBs) and the modules on them are the backbone of the robot's electronic system.
* Main control board
* Power management modules
* Sensor interfaces
* Communication modules (Wi-Fi, Bluetooth)
== Batteries ==
Batteries provide the necessary power to all robotic systems and are crucial for mobile autonomy. Selection factors include:
* Battery type (Li-Ion, NiMH, Lead-Acid)
* Capacity (measured in mAh or Ah)
* Voltage and energy density
* Safety features and durability
== Displays ==
Displays are used in robots for displaying information such as system status, data, and interactive elements. Key features include:
* Size variations ranging from small to large panels
* Touchscreen capabilities
* High resolution displays
== Microphones ==
Microphones enable the robot to receive and process audio inputs, crucial for voice commands and auditory data. Factors to consider are:
* Sensitivity and noise cancellation
* Directionality (omnidirectional vs. unidirectional)
* Integration with voice recognition software
== Speakers ==
Speakers allow the robot to communicate audibly with its environment, essential for interaction and alerts. Considerations include:
* Power output and sound quality
* Size and mounting options
* Compatibility with audio processing hardware
== Wiring and Connectors ==
Proper wiring and connectors ensure reliable communication and power supply throughout the robot's components.
* Types of wires (gauge, shielding)
* Connectors (pin types, waterproofing)
* Cable management solutions
= Simulation =
For the latest simulation artifacts, see [https://kscale.dev/ the website].
[[Category:Robots]]
[[Category:Open Source]]
e8361cf4bdf128021548ec040d15f56e3818d149
657
656
2024-04-29T19:29:40Z
Ben
2
wikitext
text/x-wiki
[[File:Stompy.jpg|right|300px|thumb]]
{{infobox robot
| name = Stompy
| organization = [[K-Scale Labs]]
| cost = USD 10,000
}}
Stompy is an open-source humanoid robot developed by [[K-Scale Labs]]. Here are some additional relevant links:
- [[Stompy To-Do List]]
- [[Stompy Build Guide]]
= Hardware =
This page is dedicated to detailing the hardware selections for humanoid robots, including various components such as actuators, cameras, compute units, PCBs and modules, batteries, displays, microphones, speakers, as well as wiring and connectors.
== Actuators ==
Actuators are the components that allow the robot to move and interact with its environment. They convert energy into mechanical motion. Common types used in humanoid robots include:
* Servo motors
* Stepper motors
* Linear actuators
== Cameras ==
Cameras are essential for visual processing, allowing the robot to perceive its surroundings. Important considerations include:
* Resolution and frame rate
* Field of view
* Depth sensing capabilities (3D cameras)
== Compute ==
The compute section handles the processing requirements of the robot. This includes:
* Microprocessors and microcontrollers
* Single-board computers like Raspberry Pi or Nvidia Jetson
* Dedicated AI accelerators for machine learning tasks
== PCB and Modules ==
Printed Circuit Boards (PCBs) and the modules on them are the backbone of the robot's electronic system.
* Main control board
* Power management modules
* Sensor interfaces
* Communication modules (Wi-Fi, Bluetooth)
== Batteries ==
Batteries provide the necessary power to all robotic systems and are crucial for mobile autonomy. Selection factors include:
* Battery type (Li-Ion, NiMH, Lead-Acid)
* Capacity (measured in mAh or Ah)
* Voltage and energy density
* Safety features and durability
== Displays ==
Displays are used in robots for displaying information such as system status, data, and interactive elements. Key features include:
* Size variations ranging from small to large panels
* Touchscreen capabilities
* High resolution displays
== Microphones ==
Microphones enable the robot to receive and process audio inputs, crucial for voice commands and auditory data. Factors to consider are:
* Sensitivity and noise cancellation
* Directionality (omnidirectional vs. unidirectional)
* Integration with voice recognition software
== Speakers ==
Speakers allow the robot to communicate audibly with its environment, essential for interaction and alerts. Considerations include:
* Power output and sound quality
* Size and mounting options
* Compatibility with audio processing hardware
== Wiring and Connectors ==
Proper wiring and connectors ensure reliable communication and power supply throughout the robot's components.
* Types of wires (gauge, shielding)
* Connectors (pin types, waterproofing)
* Cable management solutions
= Simulation =
For the latest simulation artifacts, see [https://kscale.dev/ the website].
[[Category:Robots]]
[[Category:Open Source]]
01d49b67e3b9bea5c5dbc1342b46867c66e1229f
658
657
2024-04-29T19:29:49Z
Ben
2
wikitext
text/x-wiki
[[File:Stompy.jpg|right|300px|thumb]]
{{infobox robot
| name = Stompy
| organization = [[K-Scale Labs]]
| cost = USD 10,000
}}
Stompy is an open-source humanoid robot developed by [[K-Scale Labs]]. Here are some additional relevant links:
* [[Stompy To-Do List]]
* [[Stompy Build Guide]]
= Hardware =
This page is dedicated to detailing the hardware selections for humanoid robots, including various components such as actuators, cameras, compute units, PCBs and modules, batteries, displays, microphones, speakers, as well as wiring and connectors.
== Actuators ==
Actuators are the components that allow the robot to move and interact with its environment. They convert energy into mechanical motion. Common types used in humanoid robots include:
* Servo motors
* Stepper motors
* Linear actuators
== Cameras ==
Cameras are essential for visual processing, allowing the robot to perceive its surroundings. Important considerations include:
* Resolution and frame rate
* Field of view
* Depth sensing capabilities (3D cameras)
== Compute ==
The compute section handles the processing requirements of the robot. This includes:
* Microprocessors and microcontrollers
* Single-board computers like Raspberry Pi or Nvidia Jetson
* Dedicated AI accelerators for machine learning tasks
== PCB and Modules ==
Printed Circuit Boards (PCBs) and the modules on them are the backbone of the robot's electronic system.
* Main control board
* Power management modules
* Sensor interfaces
* Communication modules (Wi-Fi, Bluetooth)
== Batteries ==
Batteries provide the necessary power to all robotic systems and are crucial for mobile autonomy. Selection factors include:
* Battery type (Li-Ion, NiMH, Lead-Acid)
* Capacity (measured in mAh or Ah)
* Voltage and energy density
* Safety features and durability
== Displays ==
Displays are used in robots for displaying information such as system status, data, and interactive elements. Key features include:
* Size variations ranging from small to large panels
* Touchscreen capabilities
* High resolution displays
== Microphones ==
Microphones enable the robot to receive and process audio inputs, crucial for voice commands and auditory data. Factors to consider are:
* Sensitivity and noise cancellation
* Directionality (omnidirectional vs. unidirectional)
* Integration with voice recognition software
== Speakers ==
Speakers allow the robot to communicate audibly with its environment, essential for interaction and alerts. Considerations include:
* Power output and sound quality
* Size and mounting options
* Compatibility with audio processing hardware
== Wiring and Connectors ==
Proper wiring and connectors ensure reliable communication and power supply throughout the robot's components.
* Types of wires (gauge, shielding)
* Connectors (pin types, waterproofing)
* Cable management solutions
= Simulation =
For the latest simulation artifacts, see [https://kscale.dev/ the website].
[[Category:Robots]]
[[Category:Open Source]]
7f573535d974c1f16f164e4da7f0899ee39341c4
659
658
2024-04-29T19:30:19Z
Ben
2
wikitext
text/x-wiki
[[File:Stompy.jpg|right|300px|thumb]]
{{infobox robot
| name = Stompy
| organization = [[K-Scale Labs]]
| cost = USD 10,000
}}
Stompy is an open-source humanoid robot developed by [[K-Scale Labs]]. Here are some relevant links:
* [[Stompy To-Do List]]
* [[Stompy Build Guide]]
= Hardware =
This page is dedicated to detailing the hardware selections for humanoid robots, including various components such as actuators, cameras, compute units, PCBs and modules, batteries, displays, microphones, speakers, as well as wiring and connectors.
== Actuators ==
Actuators are the components that allow the robot to move and interact with its environment. They convert energy into mechanical motion. Common types used in humanoid robots include:
* Servo motors
* Stepper motors
* Linear actuators
== Cameras ==
Cameras are essential for visual processing, allowing the robot to perceive its surroundings. Important considerations include:
* Resolution and frame rate
* Field of view
* Depth sensing capabilities (3D cameras)
== Compute ==
The compute section handles the processing requirements of the robot. This includes:
* Microprocessors and microcontrollers
* Single-board computers like Raspberry Pi or Nvidia Jetson
* Dedicated AI accelerators for machine learning tasks
== PCB and Modules ==
Printed Circuit Boards (PCBs) and the modules on them are the backbone of the robot's electronic system.
* Main control board
* Power management modules
* Sensor interfaces
* Communication modules (Wi-Fi, Bluetooth)
== Batteries ==
Batteries provide the necessary power to all robotic systems and are crucial for mobile autonomy. Selection factors include:
* Battery type (Li-Ion, NiMH, Lead-Acid)
* Capacity (measured in mAh or Ah)
* Voltage and energy density
* Safety features and durability
== Displays ==
Displays are used in robots for displaying information such as system status, data, and interactive elements. Key features include:
* Size variations ranging from small to large panels
* Touchscreen capabilities
* High resolution displays
== Microphones ==
Microphones enable the robot to receive and process audio inputs, crucial for voice commands and auditory data. Factors to consider are:
* Sensitivity and noise cancellation
* Directionality (omnidirectional vs. unidirectional)
* Integration with voice recognition software
== Speakers ==
Speakers allow the robot to communicate audibly with its environment, essential for interaction and alerts. Considerations include:
* Power output and sound quality
* Size and mounting options
* Compatibility with audio processing hardware
== Wiring and Connectors ==
Proper wiring and connectors ensure reliable communication and power supply throughout the robot's components.
* Types of wires (gauge, shielding)
* Connectors (pin types, waterproofing)
* Cable management solutions
= Simulation =
For the latest simulation artifacts, see [https://kscale.dev/ the website].
[[Category:Robots]]
[[Category:Open Source]]
33c44a58c07be35e2c900baebd97148d361b59e1
660
659
2024-04-29T19:30:31Z
Ben
2
wikitext
text/x-wiki
[[File:Stompy.jpg|right|300px|thumb]]
{{infobox robot
| name = Stompy
| organization = [[K-Scale Labs]]
| cost = USD 10,000
}}
Stompy is an open-source humanoid robot developed by [[K-Scale Labs]]. Here are some relevant links:
* [[Stompy To-Do List]]
* [[Stompy Build Guide]]
= Hardware =
This page is dedicated to detailing the hardware selections for humanoid robots, including various components such as actuators, cameras, compute units, PCBs and modules, batteries, displays, microphones, speakers, as well as wiring and connectors.
== Actuators ==
Actuators are the components that allow the robot to move and interact with its environment. They convert energy into mechanical motion. Common types used in humanoid robots include:
* Servo motors
* Stepper motors
* Linear actuators
== Cameras ==
Cameras are essential for visual processing, allowing the robot to perceive its surroundings. Important considerations include:
* Resolution and frame rate
* Field of view
* Depth sensing capabilities (3D cameras)
== Compute ==
The compute section handles the processing requirements of the robot. This includes:
* Microprocessors and microcontrollers
* Single-board computers like Raspberry Pi or Nvidia Jetson
* Dedicated AI accelerators for machine learning tasks
== PCB and Modules ==
Printed Circuit Boards (PCBs) and the modules on them are the backbone of the robot's electronic system.
* Main control board
* Power management modules
* Sensor interfaces
* Communication modules (Wi-Fi, Bluetooth)
== Batteries ==
Batteries provide the necessary power to all robotic systems and are crucial for mobile autonomy. Selection factors include:
* Battery type (Li-Ion, NiMH, Lead-Acid)
* Capacity (measured in mAh or Ah)
* Voltage and energy density
* Safety features and durability
== Displays ==
Displays are used in robots for displaying information such as system status, data, and interactive elements. Key features include:
* Size variations ranging from small to large panels
* Touchscreen capabilities
* High resolution displays
== Microphones ==
Microphones enable the robot to receive and process audio inputs, crucial for voice commands and auditory data. Factors to consider are:
* Sensitivity and noise cancellation
* Directionality (omnidirectional vs. unidirectional)
* Integration with voice recognition software
== Speakers ==
Speakers allow the robot to communicate audibly with its environment, essential for interaction and alerts. Considerations include:
* Power output and sound quality
* Size and mounting options
* Compatibility with audio processing hardware
== Wiring and Connectors ==
Proper wiring and connectors ensure reliable communication and power supply throughout the robot's components.
* Types of wires (gauge, shielding)
* Connectors (pin types, waterproofing)
* Cable management solutions
= Simulation =
For the latest simulation artifacts, see [https://kscale.dev/ the website].
[[Category:Robots]]
[[Category:Open Source]]
48c3a6cc32ac0e37d5c2a41f9f10e003ca621c39
MIT Cheetah
0
158
664
2024-04-29T19:34:08Z
Ben
2
Created page with "The '''MIT Cheetah''' is an open-source quadropedal robot designed using low-inertia actuators. [[Category:Actuators]] [[Category:Open Source]]"
wikitext
text/x-wiki
The '''MIT Cheetah''' is an open-source quadrupedal robot designed using low-inertia actuators.
[[Category:Actuators]]
[[Category:Open Source]]
6acfa8a494e4e34bd08d0fd53caae78b9a02b33b
MIT Cheetah
0
158
665
664
2024-04-29T19:34:37Z
Ben
2
wikitext
text/x-wiki
The '''MIT Cheetah''' is an open-source quadrupedal robot designed using low-inertia actuators.
[[Category:Actuators]]
[[Category:Open Source]]
[[Category:Stompy, Expand!]]
606f8209b269ef63917f9c250b46129808ca17fc
666
665
2024-04-29T20:04:32Z
Stompy
14
Bot expanded article
wikitext
text/x-wiki
'''MIT Cheetah''' is an open-source quadrupedal robot designed using low-inertia actuators. The robot was developed in the lab of Professor Sangbae Kim at MIT<ref>https://www.csail.mit.edu/news/one-giant-leap-mini-cheetah</ref>. The MIT Cheetah is known for its agility and ability to adapt to varying terrain conditions without requiring a terrain map in advance<ref>https://www.csail.mit.edu/news/one-giant-leap-mini-cheetah</ref>.
{{infobox robot
| name = MIT Cheetah
| organization = MIT
| video_link =
| cost =
| height =
| weight = 20 pounds
| speed = About twice average human walking speed<ref>https://news.mit.edu/2019/mit-mini-cheetah-first-four-legged-robot-to-backflip-0304</ref><ref>http://robotics.mit.edu/mini-cheetah-first-four-legged-robot-do-backflip</ref>
| lift_force =
| battery_life =
| battery_capacity =
| purchase_link =
| number_made =
| dof =
| status = Active
}}
== Design and Development ==
The MIT Cheetah is highly adaptable. Despite weighing only 20 pounds, the robot can bend and swing its legs through a wide arc, allowing it to walk either right side up or upside down<ref>https://news.mit.edu/2019/mit-mini-cheetah-first-four-legged-robot-to-backflip-0304</ref>. This flexibility results from a design that prioritizes a wide range of motion.
The robot is further equipped to deal with challenges posed by rough, uneven terrain. It can traverse such landscapes at a pace twice as fast as an average person's walking speed<ref>http://robotics.mit.edu/mini-cheetah-first-four-legged-robot-do-backflip</ref>. The MIT Cheetah’s design, particularly the implementation of a control system that enables agile running, has been largely driven by the "learn-by-experience model". This approach, in contrast to previous designs reliant primarily on human analytical insights, allows the robot to respond quickly to changes in the environment<ref>https://news.mit.edu/2022/3-questions-how-mit-mini-cheetah-learns-run-fast-0317</ref>.
== See also ==
* [[Sangbae Kim]]
* [[Low-inertia actuator]]
== References ==
<references />
[[Category:Actuators]]
[[Category:Open Source]]
[[Category:Robots]]
3d3130a1540c5784fc19f305d84994a8c68900a3
667
666
2024-04-29T20:07:43Z
Ben
2
wikitext
text/x-wiki
'''MIT Cheetah''' is an open-source quadrupedal robot designed using low-inertia actuators. The robot was developed in the lab of Professor Sangbae Kim at MIT<ref>https://www.csail.mit.edu/news/one-giant-leap-mini-cheetah</ref>. The MIT Cheetah is known for its agility and ability to adapt to varying terrain conditions without requiring a terrain map in advance<ref>https://www.csail.mit.edu/news/one-giant-leap-mini-cheetah</ref>.
{{infobox robot
| name = MIT Cheetah
| organization = MIT
| video_link =
| cost =
| height =
| weight = 20 pounds
| speed = About twice average human walking speed<ref>https://news.mit.edu/2019/mit-mini-cheetah-first-four-legged-robot-to-backflip-0304</ref><ref>http://robotics.mit.edu/mini-cheetah-first-four-legged-robot-do-backflip</ref>
| lift_force =
| battery_life =
| battery_capacity =
| purchase_link =
| number_made =
| dof =
| status = Active
}}
== Design and Development ==
The MIT Cheetah is highly adaptable. Despite weighing only 20 pounds, the robot can bend and swing its legs through a wide arc, allowing it to walk either right side up or upside down<ref>https://news.mit.edu/2019/mit-mini-cheetah-first-four-legged-robot-to-backflip-0304</ref>. This flexibility results from a design that prioritizes a wide range of motion.
The robot is further equipped to deal with challenges posed by rough, uneven terrain. It can traverse such landscapes at a pace twice as fast as an average person's walking speed<ref>http://robotics.mit.edu/mini-cheetah-first-four-legged-robot-do-backflip</ref>. The MIT Cheetah’s design, particularly the implementation of a control system that enables agile running, has been largely driven by the "learn-by-experience model". This approach, in contrast to previous designs reliant primarily on human analytical insights, allows the robot to respond quickly to changes in the environment<ref>https://news.mit.edu/2022/3-questions-how-mit-mini-cheetah-learns-run-fast-0317</ref>.
== References ==
<references />
[[Category:Actuators]]
[[Category:Open Source]]
[[Category:Robots]]
e07bb2c4d991044cf16c8b82d6cbb41c0dd2ab34
668
667
2024-04-29T20:08:16Z
Ben
2
wikitext
text/x-wiki
'''MIT Cheetah''' is an open-source quadrupedal robot designed using low-inertia actuators. The robot was developed in the lab of Professor Sangbae Kim at MIT<ref>https://www.csail.mit.edu/news/one-giant-leap-mini-cheetah</ref>. The MIT Cheetah is known for its agility and ability to adapt to varying terrain conditions without requiring a terrain map in advance.
{{infobox robot
| name = MIT Cheetah
| organization = MIT
| video_link =
| cost =
| height =
| weight = 20 pounds
| speed = About twice average human walking speed<ref>https://news.mit.edu/2019/mit-mini-cheetah-first-four-legged-robot-to-backflip-0304</ref><ref>http://robotics.mit.edu/mini-cheetah-first-four-legged-robot-do-backflip</ref>
| lift_force =
| battery_life =
| battery_capacity =
| purchase_link =
| number_made =
| dof =
| status = Active
}}
== Design and Development ==
The MIT Cheetah is highly adaptable. Despite weighing only 20 pounds, the robot can bend and swing its legs through a wide arc, allowing it to walk either right side up or upside down<ref>https://news.mit.edu/2019/mit-mini-cheetah-first-four-legged-robot-to-backflip-0304</ref>. This flexibility results from a design that prioritizes a wide range of motion.
The robot is further equipped to deal with challenges posed by rough, uneven terrain. It can traverse such landscapes at a pace twice as fast as an average person's walking speed<ref>http://robotics.mit.edu/mini-cheetah-first-four-legged-robot-do-backflip</ref>. The MIT Cheetah’s design, particularly the implementation of a control system that enables agile running, has been largely driven by the "learn-by-experience model". This approach, in contrast to previous designs reliant primarily on human analytical insights, allows the robot to respond quickly to changes in the environment<ref>https://news.mit.edu/2022/3-questions-how-mit-mini-cheetah-learns-run-fast-0317</ref>.
== References ==
<references />
[[Category:Actuators]]
[[Category:Open Source]]
[[Category:Robots]]
b9b5d78c389c713ece94fd172b4de2d7a5c100c5
669
668
2024-04-29T20:26:22Z
Ben
2
wikitext
text/x-wiki
'''MIT Cheetah''' is an open-source quadrupedal robot designed using low-inertia actuators. The robot was developed in the lab of Professor Sangbae Kim at MIT<ref>https://www.csail.mit.edu/news/one-giant-leap-mini-cheetah</ref>. The MIT Cheetah is known for its agility and ability to adapt to varying terrain conditions without requiring a terrain map in advance.
{{infobox robot
| name = MIT Cheetah
| organization = MIT
| video_link =
| cost =
| height =
| weight = 20 pounds
| speed = About twice average human walking speed<ref>https://news.mit.edu/2019/mit-mini-cheetah-first-four-legged-robot-to-backflip-0304</ref><ref>http://robotics.mit.edu/mini-cheetah-first-four-legged-robot-do-backflip</ref>
| lift_force =
| battery_life =
| battery_capacity =
| purchase_link =
| number_made =
| dof =
| status = Active
}}
== Design and Development ==
The MIT Cheetah is highly adaptable. Despite weighing only 20 pounds, the robot can bend and swing its legs through a wide arc, allowing it to walk either right side up or upside down<ref>https://news.mit.edu/2019/mit-mini-cheetah-first-four-legged-robot-to-backflip-0304</ref>. This flexibility results from a design that prioritizes a wide range of motion.
The robot is further equipped to deal with challenges posed by rough, uneven terrain. It can traverse such landscapes at a pace twice as fast as an average person's walking speed<ref>http://robotics.mit.edu/mini-cheetah-first-four-legged-robot-do-backflip</ref>. The MIT Cheetah’s design, particularly the implementation of a control system that enables agile running, has been largely driven by the "learn-by-experience model". This approach, in contrast to previous designs reliant primarily on human analytical insights, allows the robot to respond quickly to changes in the environment<ref>https://news.mit.edu/2022/3-questions-how-mit-mini-cheetah-learns-run-fast-0317</ref>.
=== Chips ===
* [https://www.st.com/en/microcontrollers-microprocessors/stm32-32-bit-arm-cortex-mcus.html STM32 32-bit Arm Cortex MCU]
* [https://www.ti.com/product/DRV8323 DRV8323 3-phase smart gate driver]
* [https://www.monolithicpower.com/en/ma702.html MA702 angular position measurement device]
* [https://www.microchip.com/en-us/product/mcp2515 MCP2515 CAN Controller with SPI Interface]
== References ==
<references />
[[Category:Actuators]]
[[Category:Open Source]]
[[Category:Robots]]
84ab0d43b891b17a6428395d5eff1d8f67d0e3a3
K-Scale Intern Onboarding
0
139
670
482
2024-04-29T20:54:09Z
Ben
2
wikitext
text/x-wiki
Congratulations on your internship at K-Scale Labs! We are excited for you to join us.
=== Onboarding ===
* Watch out for an email from Gusto (our HR software), with an official offer letter and instructions on how to onboard you into our system.
* Once you accept, Ben will add you to the system, after which you will have to enter your bank account information in order to be paid.
=== Pre-Internship Checklist ===
* Create a wiki account and mark yourself as an employee (you can use [[User:Ben]] as a template). You'll use your account as the main way to keep track of what you've done over the course of the internship.
* Contribute an article about something you find interesting. See the [[Contributing]] guide.
=== Additional Notes ===
* For travel expenses, please purchase your own flight and keep your receipts so that we can reimburse you later.
6d4cb77692aa636842375cf984f2117d4f3241d6
722
670
2024-04-30T05:06:10Z
108.211.178.220
0
wikitext
text/x-wiki
Congratulations on your internship at K-Scale Labs! We are excited for you to join us.
=== Onboarding ===
* Watch out for an email from Gusto (our HR software), with an official offer letter and instructions on how to onboard you into our system.
* Once you accept, Ben will add you to the system, after which you will have to enter your bank account information in order to be paid.
=== Pre-Internship Checklist ===
* Create a wiki account and mark yourself as an employee (you can use [[User:Ben]] as a template). You'll use your account as the main way to keep track of what you've done over the course of the internship.
* Contribute an article about something you find interesting. See the [[Contributing]] guide.
=== Additional Notes ===
* For travel expenses, please purchase your own flight and keep your receipts so that we can reimburse you later.
[[Category:K-Scale]]
94c2c2fcb73cfa09cbf274788d6c39397ca7e444
Main Page
0
1
671
634
2024-04-29T22:57:21Z
106.213.81.175
0
/* List of Humanoid Robots */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
Here are some resources to get started learning about humanoid robots.
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
|
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[Reinforcement Learning]]
| Resources related to understanding reinforcement learning
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== List of Actuators ===
{| class="wikitable"
|-
! Actuator
! Notes
|-
| [[MyActuator X-Series]]
| MIT Cheetah-like quasi-direct drive actuator, with planetary gears
|-
| [[OBot]]
| Open-source actuator
|-
| [[SPIN Servo]]
| Open-source actuator
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[Xpeng]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[Proxy]]
|
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Kayra.org]]
| [[Kayra]]
|}
88b296ec12507a5a311563b5e7ab5b1a30a42022
695
671
2024-04-29T23:53:52Z
Admin
1
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
Here are some resources to get started learning about humanoid robots.
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
|
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[Reinforcement Learning]]
| Resources related to understanding reinforcement learning
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== List of Actuators ===
{| class="wikitable"
|-
! Actuator
! Notes
|-
| [[MyActuator X-Series]]
| MIT Cheetah-like quasi-direct drive actuator, with planetary gears
|-
| [[OBot]]
| Open-source actuator
|-
| [[SPIN Servo]]
| Open-source actuator
|-
| [[VESCular6]]
| A project to apply [[VESC]] to
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[Xpeng]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[Proxy]]
|
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Kayra.org]]
| [[Kayra]]
|}
d5370c303d29291fc299f9b33f4f882f16211bd9
696
695
2024-04-30T00:21:53Z
Admin
1
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
Here are some resources to get started learning about humanoid robots.
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
|
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[Reinforcement Learning]]
| Resources related to understanding reinforcement learning
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== List of Actuators ===
{| class="wikitable"
|-
! Actuator
! Notes
|-
| [[MyActuator X-Series]]
| MIT Cheetah-like quasi-direct drive actuator, with planetary gears
|-
| [[OBot]]
| Open-source actuator
|-
| [[SPIN Servo]]
| Open-source actuator
|-
| [[VESCular6]]
| A project to apply [[VESC]] to
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[Xpeng]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[Proxy]]
|
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Kayra.org]]
| [[Kayra]]
|}
889e376cb04b5e9bafa5b752aac64dcc508d7c1b
700
696
2024-04-30T00:33:41Z
Admin
1
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
Here are some resources to get started learning about humanoid robots.
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
|
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[Reinforcement Learning]]
| Resources related to understanding reinforcement learning
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== List of Actuators ===
{| class="wikitable"
|-
! Actuator
! Notes
|-
| [[MyActuator X-Series]]
| MIT Cheetah-like quasi-direct-drive actuator with planetary gears
|-
| [[OBot]]
| Open-source actuator
|-
| [[SPIN Servo]]
| Open-source actuator
|-
| [[VESCular6]]
| A project based on [[VESC]]
|-
| [[ODrive]]
|
|-
| [[Solo Motor Controller]]
| A motor controller alternative to the [[ODrive]].
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[Xpeng]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[Proxy]]
|
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Kayra.org]]
| [[Kayra]]
|}
87c487d12821493c2fc5526045982ed0f6c8e51b
File:RMD X8-H actuator PCB.jpg
6
161
680
2024-04-29T23:23:18Z
Ben
2
wikitext
text/x-wiki
RMD X8-H actuator PCB
19d8cf3fbc55385c7de6702ffba955fd23b0b153
Controller Area Network (CAN)
0
155
685
648
2024-04-29T23:29:05Z
Ben
2
wikitext
text/x-wiki
The '''Controller Area Network''' (CAN) is a robust vehicle bus standard designed to let microcontrollers and devices communicate with each other without a host computer. Its primary purpose is to enable fast, reliable data exchange between the various systems within a vehicle.
=== MCP2515 ===
The '''MCP2515''' is an integrated circuit produced by Microchip Technology that functions as a stand-alone CAN controller. It communicates with a host microcontroller over SPI (Serial Peripheral Interface), which makes it versatile. Although it is primarily used in the automotive industry, it also appears in a variety of other control applications.<ref>MCP2515 Stand-Alone CAN Controller with SPI Interface. Microchip Technology. [https://www.microchip.com/wwwproducts/en/en010406 Official datasheet].</ref>
The MCP2515 bridges the connection between the CAN protocol and the SPI protocol by receiving CAN messages and translating them into SPI data, allowing the microcontroller to interpret the information. Similarly, it transforms SPI data into CAN messages for transmission.<ref>MCP2515 Stand-Alone CAN Controller with SPI Interface. Microchip Technology. [https://www.microchip.com/wwwproducts/en/en010406 Official datasheet].</ref>
The MCP2515's operational details include its capacity to support CAN 2.0A and B, drawing attention to its alignment with established CAN standards and equipping it for both basic and extended frame format usage.<ref>MCP2515 Stand-Alone CAN Controller with SPI Interface. Microchip Technology. [https://www.microchip.com/wwwproducts/en/en010406 Official datasheet].</ref>
== References ==
<references />
[[Category:Communication]]
32519f6d3ebd9eec70468759b564eaf366c41136
686
685
2024-04-29T23:30:20Z
Admin
1
Admin moved page [[CAN Bus]] to [[Controller Area Network (CAN)]]: Better name
wikitext
text/x-wiki
The '''Controller Area Network''' (CAN) is a robust vehicle bus standard designed to let microcontrollers and devices communicate with each other without a host computer. Its primary purpose is to enable fast, reliable data exchange between the various systems within a vehicle.
=== MCP2515 ===
The '''MCP2515''' is an integrated circuit produced by Microchip Technology that functions as a stand-alone CAN controller. It communicates with a host microcontroller over SPI (Serial Peripheral Interface), which makes it versatile. Although it is primarily used in the automotive industry, it also appears in a variety of other control applications.<ref>MCP2515 Stand-Alone CAN Controller with SPI Interface. Microchip Technology. [https://www.microchip.com/wwwproducts/en/en010406 Official datasheet].</ref>
The MCP2515 bridges the connection between the CAN protocol and the SPI protocol by receiving CAN messages and translating them into SPI data, allowing the microcontroller to interpret the information. Similarly, it transforms SPI data into CAN messages for transmission.<ref>MCP2515 Stand-Alone CAN Controller with SPI Interface. Microchip Technology. [https://www.microchip.com/wwwproducts/en/en010406 Official datasheet].</ref>
The MCP2515's operational details include its capacity to support CAN 2.0A and B, drawing attention to its alignment with established CAN standards and equipping it for both basic and extended frame format usage.<ref>MCP2515 Stand-Alone CAN Controller with SPI Interface. Microchip Technology. [https://www.microchip.com/wwwproducts/en/en010406 Official datasheet].</ref>
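On Linux, a common way to exercise a CAN bus attached through an MCP2515 breakout is SocketCAN together with the <code>can-utils</code> package. The sketch below is a minimal example and makes a few assumptions: the interface appears as <code>can0</code>, the bus runs at 500 kbit/s, and (on a Raspberry Pi) the device-tree overlay parameters match your board's crystal and interrupt pin.
<syntaxhighlight lang="bash">
# Raspberry Pi only: enable the MCP2515 overlay in /boot/config.txt, then reboot.
# The oscillator frequency and interrupt GPIO below are examples; check your board.
#   dtoverlay=mcp2515-can0,oscillator=16000000,interrupt=25

# Bring the CAN interface up at 500 kbit/s (the bitrate must match the rest of the bus).
sudo ip link set can0 up type can bitrate 500000

# Print every frame seen on the bus.
candump can0

# Send a frame with ID 0x123 and four data bytes.
cansend can0 123#DEADBEEF
</syntaxhighlight>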
== References ==
<references />
[[Category:Communication]]
32519f6d3ebd9eec70468759b564eaf366c41136
CAN Bus
0
162
687
2024-04-29T23:30:20Z
Admin
1
Admin moved page [[CAN Bus]] to [[Controller Area Network (CAN)]]: Better name
wikitext
text/x-wiki
#REDIRECT [[Controller Area Network (CAN)]]
e273345fd34037f23609e48e5318803810e08bc4
File:Vescular6.png
6
163
688
2024-04-29T23:49:38Z
Admin
1
wikitext
text/x-wiki
Vescular6 PCB
3d94c008d77600b7f4b57f60e895a2e76dad232c
VESCular6
0
164
689
2024-04-29T23:49:57Z
Admin
1
Created page with "[https://dongilc.gitbook.io/openrobot-inc VESCular6] is an open-source motor controller based on [[VESC]]. [[File:Vescular6.png|thumb|left]]"
wikitext
text/x-wiki
[https://dongilc.gitbook.io/openrobot-inc VESCular6] is an open-source motor controller based on [[VESC]].
[[File:Vescular6.png|thumb|left]]
c43c591d0ebb2f882581cc2b0abf9a4cb3eef9e7
VESC
0
165
690
2024-04-29T23:50:52Z
Admin
1
Created page with "The [https://vesc-project.com/vesc_tool VESC project] is an open-source [[ESC]] implementation<ref>https://vedder.se/2015/01/vesc-open-source-esc/</ref>. === References === <..."
wikitext
text/x-wiki
The [https://vesc-project.com/vesc_tool VESC project] is an open-source [[ESC]] implementation<ref>https://vedder.se/2015/01/vesc-open-source-esc/</ref>.
=== References ===
<references/>
f893322eb5042e1bd6a9c7b256d065123942668f
Electronic Speed Control
0
166
691
2024-04-29T23:51:41Z
Admin
1
Created page with "An ```Electronic Speed Control``` is an electronic circuit that controls and regulates the speed of an electric motor."
wikitext
text/x-wiki
An ```Electronic Speed Control``` is an electronic circuit that controls and regulates the speed of an electric motor.
6f03da7f302cd575dbfb3014276db83bc32b61e4
692
691
2024-04-29T23:51:51Z
Admin
1
wikitext
text/x-wiki
An '''Electronic Speed Control''' is an electronic circuit that controls and regulates the speed of an electric motor.
b0dc2949e65b0a8e26aa405e06cb4f55430411ba
693
692
2024-04-29T23:52:06Z
Admin
1
Admin moved page [[ESC]] to [[Electronic Speed Control]]
wikitext
text/x-wiki
An '''Electronic Speed Control''' is an electronic circuit that controls and regulates the speed of an electric motor.
b0dc2949e65b0a8e26aa405e06cb4f55430411ba
ESC
0
167
694
2024-04-29T23:52:06Z
Admin
1
Admin moved page [[ESC]] to [[Electronic Speed Control]]
wikitext
text/x-wiki
#REDIRECT [[Electronic Speed Control]]
17a0b02bfb36300ffcdde3cc07bc66c2753ceebc
EtherCAT
0
168
697
2024-04-30T00:22:22Z
Admin
1
Created page with "'''EtherCAT (Ethernet for Control Automation Technology)''' is an Ethernet-based fieldbus system developed by Beckhoff Automation. [[Category:Stompy, Expand!]]"
wikitext
text/x-wiki
'''EtherCAT (Ethernet for Control Automation Technology)''' is an Ethernet-based fieldbus system developed by Beckhoff Automation.
[[Category:Stompy, Expand!]]
1982882bad0a3fe09db89fe3d0a5dc990d4f43d9
698
697
2024-04-30T00:31:38Z
Admin
1
wikitext
text/x-wiki
'''EtherCAT (Ethernet for Control Automation Technology)''' is an Ethernet-based fieldbus system developed by Beckhoff Automation.
[[Category:Stompy, Expand!]]
[[Category:Communication]]
05a1905b64bb67eb3b3c710cbe122d2f04ace42f
703
698
2024-04-30T01:04:29Z
Stompy
14
Bot expanded article
wikitext
text/x-wiki
EtherCAT (Ethernet for Control Automation Technology) is a reliable, real-time Ethernet-based fieldbus system initially developed and implemented by Beckhoff Automation. Remarkably effective, this system uses a highly efficient communication method that allows data to be transferred from an industrial computer and communicated downstream to each piece of equipment in a daisy-chain configuration. The EtherCAT protocol is open and has gained wide acceptance due to its real-time capabilities and robust performance in industrial environments.
The protocol was developed to offer high performance, real-time capabilities, and cost-effectiveness for systems that range from small to large scale. It finds a vast array of applications in machine controls, as well as in robotics and other fields that require high-speed, reliable communication between controllers and devices.
This system is compatible with standard Ethernet and, therefore, does not require any specific hardware, which makes it a cost-effective solution. EtherCAT's ability to function on existing networks means it can work in tandem with other protocols and can share the same cables, switches, and the like, including industrial Ethernet infrastructure.
== History ==
The EtherCAT technology was initially introduced by Beckhoff Automation in 2003. It was then smoothly transferred to the EtherCAT Technology Group (ETG) and has since been developed and maintained by this group. ETG is an open, non-profit organization that offers membership to manufacturers, developers, and users. The protocol has now established itself globally, becoming a high-performance Ethernet communication system in automation technology due to its real-time capabilities and the ability to connect a large number of devices to a single network.
== Key Features of EtherCAT ==
EtherCAT offers several distinguishing features that contribute to its wide-spread use:
* '''High Speed''': The system can process 1000 distributed I/O in 30 μs, or communicate with 100 servo axes in 100 μs.
* '''Efficiency''': Data is not just passed along the communication line, but devices can directly read and write data as it flows by.
* '''Robustness''': EtherCAT features error detection and flexible topology configurations to attain robustness against failure or data corruption.
* '''Operability''': It allows for online configuration and diagnostics, making it a flexible and operable solution.
* '''Cost-effectiveness''': Requires minimal hardware and can function over standard Ethernet, making its implementation cost-effective compared to other fieldbus systems.
== Applications ==
The efficiency, robustness, and high speed of EtherCAT have made it a preferred choice in a host of applications:
* '''Industrial Automation''': EtherCAT's high-speed, deterministic communication and robust error detection make it widely accepted in industrial automation, such as assembly lines, packing machinery, and material handling.
* '''Robotics''': The perfect choice for complex servo control systems due to its capacity to handle fast control loops and synchronize large amounts of data.
* '''Wind Energy''': EtherCAT is commonly used to control and monitor wind turbines due to its ability to accurately control multiple axes simultaneously.
* '''Medical Technology''': In medical technology, EtherCAT is embraced for applications such as medical imaging, where it can handle the transfer of large amounts of data in real time.
{{infobox protocol
| name = EtherCAT
| organization = Beckhoff Automation
| introduced_date = 2003
| standard = IEC 61158, IEC 61784-2
| network_type = Ethernet-based
| topology = Daisy-chain, star, tree, or mixed arrangements
| speed = Up to 100 Mbit/s
| distance = Up to 100m (over copper cable), Up to 20km (over fiber)
| website_link = https://www.ethercat.org/
}}
[[Category:Communication]]
[[Category:Automation]]
[[Category:Fieldbus systems]]
[[Category:Industrial Ethernet]]
[[Category:Robotics]]
== References ==
<references />
b2159ee143fb8e96a38600726c7794c4a605386e
725
703
2024-04-30T05:24:50Z
108.211.178.220
0
wikitext
text/x-wiki
EtherCAT (Ethernet for Control Automation Technology) is a reliable, real-time Ethernet-based fieldbus system initially developed and implemented by Beckhoff Automation. Remarkably effective, this system uses a highly efficient communication method that allows data to be transferred from an industrial computer and communicated downstream to each piece of equipment in a daisy-chain configuration. The EtherCAT protocol is open and has gained wide acceptance due to its real-time capabilities and robust performance in industrial environments.
The protocol was developed to offer high performance, real-time capabilities, and cost-effectiveness for systems that range from small to large scale. It finds a vast array of applications in machine controls, as well as in robotics and other fields that require high-speed, reliable communication between controllers and devices.
This system is compatible with standard Ethernet and, therefore, does not require any specific hardware, which makes it a cost-effective solution. EtherCAT's ability to function on existing networks means it can work in tandem with other protocols and can share the same cables, switches, and the like, including industrial Ethernet infrastructure.
== History ==
The EtherCAT technology was initially introduced by Beckhoff Automation in 2003. It was then smoothly transferred to the EtherCAT Technology Group (ETG) and has since been developed and maintained by this group. ETG is an open, non-profit organization that offers membership to manufacturers, developers, and users. The protocol has now established itself globally, becoming a high-performance Ethernet communication system in automation technology due to its real-time capabilities and the ability to connect a large number of devices to a single network.
== Key Features of EtherCAT ==
EtherCAT offers several distinguishing features that contribute to its wide-spread use:
* '''High Speed''': The system can process 1000 distributed I/O in 30 μs, or communicate with 100 servo axes in 100 μs.
* '''Efficiency''': Data is not just passed along the communication line, but devices can directly read and write data as it flows by.
* '''Robustness''': EtherCAT features error detection and flexible topology configurations to attain robustness against failure or data corruption.
* '''Operability''': It allows for online configuration and diagnostics, making it a flexible and operable solution.
* '''Cost-effectiveness''': Requires minimal hardware and can function over standard Ethernet, making its implementation cost-effective compared to other fieldbus systems.
== Applications ==
The efficiency, robustness, and high speed of EtherCAT have made it a preferred choice in a host of applications:
* '''Industrial Automation''': EtherCAT's high-speed, deterministic communication and robust error detection make it widely accepted in industrial automation, such as assembly lines, packing machinery, and material handling.
* '''Robotics''': The perfect choice for complex servo control systems due to its capacity to handle fast control loops and synchronize large amounts of data.
* '''Wind Energy''': EtherCAT is commonly used to control and monitor wind turbines due to its ability to accurately control multiple axes simultaneously.
* '''Medical Technology''': In medical technology, EtherCAT is embraced for applications such as medical imaging, where it can handle the transfer of large amounts of data in real time.
{{infobox protocol
| name = EtherCAT
| organization = Beckhoff Automation
| introduced_date = 2003
| standard = IEC 61158, IEC 61784-2
| network_type = Ethernet-based
| topology = Daisy-chain, star, tree, or mixed arrangements
| speed = Up to 100 Mbit/s
| distance = Up to 100m (over copper cable), Up to 20km (over fiber)
| website_link = https://www.ethercat.org/
}}
[[Category:Communication]]
== References ==
<references />
77e60a24bbd337fc76793e3759ab4fc051782981
Solo Motor Controller
0
169
699
2024-04-30T00:32:54Z
Admin
1
Created page with "The [https://www.solomotorcontrollers.com/ Solo Motor Controller] is a motor controller from Italy. [[Category:Stompy, Expand!]]"
wikitext
text/x-wiki
The [https://www.solomotorcontrollers.com/ Solo Motor Controller] is a motor controller from Italy.
[[Category:Stompy, Expand!]]
9aaf3ff505f9c0b8ec905a1c7197c456ba505b26
702
699
2024-04-30T00:34:19Z
Admin
1
wikitext
text/x-wiki
The [https://www.solomotorcontrollers.com/ Solo Motor Controller] is a motor controller from Italy.
[[Category:Stompy, Expand!]]
[[Category:Actuators]]
e23ca4c8a0ce1cf677f07dc090e904c1aea68561
705
702
2024-04-30T01:05:25Z
Stompy
14
Bot expanded article
wikitext
text/x-wiki
The '''Solo Motor Controller''' is a high-performance motor controller designed and manufactured in Italy. It is capable of handling various motor types, including DC, Brushless DC, Permanent Magnet Synchronous (PMSM), and AC induction motors.
{{infobox actuator
|name = Solo Motor Controller
|manufacturer = Solo Motor Controllers
|cost =
|purchase_link = https://www.solomotorcontrollers.com/
|nominal_torque =
|peak_torque =
|weight =
|dimensions =
|gear_ratio =
|voltage = 8 to 60VDC
|cad_link =
|interface =
|gear_type =
}}
== Specifications ==
The Solo Motor Controller is distinguished by its compatibility with a broad spectrum of motors. It possesses advanced features and specifications.
* **Voltage:** The Solo Motor Controller can operate with an input voltage from 8VDC to 60VDC.
* **Continuous Operation:** With a continuous operation capacity of 110Arms, it can deliver up to a 120A DC current.
* **Operation Modes:** The motor controller is designed to operate in a closed-loop method, enabling smooth and efficient motor control.
However, detailed numerical specifications such as weight, dimensions, nominal and peak torque, gear ratio, and specifics of its interface are not readily available from mainstream sources.
== Future Developments ==
The company promises continued advancement in their product line with the upcoming release of the "SOLO MEGA" in June 2023, a 6kW motor controller offering the same diverse motor compatibility but with even more enhanced features.
== References ==
<references />
* [https://www.solomotorcontrollers.com/product/solo-uno/ Solo Motor Controllers - SOLO UNO]
* [https://hackaday.io/project/170494-solo-a-motor-controller-for-all-motors Hackaday.io - SOLO, A Motor Controller for All Motors]
* [https://www.solomotorcontrollers.com/resources/specs-datasheets/ Solo Motor Controllers - Specs & Datasheets]
* [https://www.solomotorcontrollers.com/wp-content/uploads/materials/SOLO_MINI_User_Manual.pdf SOLO MINI User Manual]
[[Category: Actuators]]
62c20ad56998d7099d0e7cb057519b59814c4d7d
707
705
2024-04-30T02:29:56Z
108.211.178.220
0
wikitext
text/x-wiki
The '''Solo Motor Controller''' is a high-performance motor controller designed and manufactured in Italy. It is capable of handling various motor types, including DC, Brushless DC, Permanent Magnet Synchronous (PMSM), and AC induction motors.
{{infobox actuator
|name = Solo Motor Controller
|manufacturer = Solo Motor Controllers
|cost =
|purchase_link = https://www.solomotorcontrollers.com/
|nominal_torque =
|peak_torque =
|weight =
|dimensions =
|gear_ratio =
|voltage = 8 to 60VDC
|cad_link =
|interface =
|gear_type =
}}
== Specifications ==
The Solo Motor Controller is distinguished by its compatibility with a broad spectrum of motors. It possesses advanced features and specifications.
* '''Voltage:''' The Solo Motor Controller can operate with an input voltage from 8VDC to 60VDC.
* '''Continuous Operation:''' With a continuous operation capacity of 110Arms, it can deliver up to a 120A DC current.
* '''Operation Modes:''' The motor controller is designed to operate in a closed-loop method, enabling smooth and efficient motor control.
However, detailed numerical specifications such as weight, dimensions, nominal and peak torque, gear ratio, and specifics of its interface are not readily available from mainstream sources.
== Future Developments ==
The company promises continued advancement in their product line with the upcoming release of the "SOLO MEGA" in June 2023, a 6kW motor controller offering the same diverse motor compatibility but with even more enhanced features.
== References ==
<references />
* [https://www.solomotorcontrollers.com/product/solo-uno/ Solo Motor Controllers - SOLO UNO]
* [https://hackaday.io/project/170494-solo-a-motor-controller-for-all-motors Hackaday.io - SOLO, A Motor Controller for All Motors]
* [https://www.solomotorcontrollers.com/resources/specs-datasheets/ Solo Motor Controllers - Specs & Datasheets]
* [https://www.solomotorcontrollers.com/wp-content/uploads/materials/SOLO_MINI_User_Manual.pdf SOLO MINI User Manual]
[[Category: Actuators]]
93234a2a359ccaea7083234ed53d7eaa5f7e8993
ODrive
0
170
701
2024-04-30T00:34:08Z
Admin
1
Created page with "The [https://odriverobotics.com/ ODrive] is a precision motor controller. [[Category:Stompy, Expand!]] [[Category:Actuators]]"
wikitext
text/x-wiki
The [https://odriverobotics.com/ ODrive] is a precision motor controller.
[[Category:Stompy, Expand!]]
[[Category:Actuators]]
3627f5ae4e6447e8496b2bee69d09a2866e4bf45
704
701
2024-04-30T01:04:57Z
Stompy
14
Bot expanded article
wikitext
text/x-wiki
'''ODrive''' is a precision motor controller that is highly applicable in robotics domains, CNC machines, and other areas demanding high-performance motor control. It provides accurate control over hobby-grade and industrial electric motors.
==Product Description==
ODrive is primarily used for the control of electric motors with precision. The device is well-suited for accurate control of both hobby-grade and industrial motors. The controller has been specially designed for applications demanding high performance. The flexibility of the ODrive controller allows it to be used in a variety of applications including, but not limited to, robotics and CNC machines.
==Industrial Application==
The ODrive motor controller finds its application in numerous sectors, particularly in robotics and CNC machines. This is mainly due to its flexibility, which facilitates control over a variety of motors ranging from hobby-grade to industrial ones. Furthermore, the precision and high performance offered by the controller make it a preferred choice for professionals.
==Technical Specifications==
While the exact technical specifications of the ODrive motor controller can vary depending on the model, the controller is generally characterized by the following features:
- High performance motor control: Ensures precision and accuracy.
- Flexible interface: Facilitates easy integration with a variety of motors.
- Compatible with various types of electric motors: Ensures wide application.
{{infobox actuator
| name = ODrive Motor Controller
| manufacturer = ODrive Robotics
| cost = To be confirmed
| purchase_link = https://odriverobotics.com/
| nominal_torque = To be confirmed
| peak_torque = To be confirmed
| weight = To be confirmed
| dimensions = To be confirmed
| gear_ratio = N/A
| voltage = To be confirmed
| cad_link = To be confirmed
| interface = To be confirmed
| gear_type = N/A
}}
== References ==
<references />
[[Category:Actuators]]
65d9479e430785c1482d650f5e9be8dc911243db
714
704
2024-04-30T03:53:08Z
108.211.178.220
0
wikitext
text/x-wiki
'''ODrive''' is a precision motor controller that is highly applicable in robotics domains, CNC machines, and other areas demanding high-performance motor control. It provides accurate control over hobby-grade and industrial electric motors.
{{infobox actuator
| name = ODrive Motor Controller
| manufacturer = ODrive Robotics
| cost =
| purchase_link = https://odriverobotics.com/
| nominal_torque =
| peak_torque =
| weight =
| dimensions =
| gear_ratio =
| voltage =
| cad_link =
| interface =
| gear_type =
}}
== Product Description ==
ODrive is primarily used for the control of electric motors with precision. The device is well-suited for accurate control of both hobby-grade and industrial motors. The controller has been specially designed for applications demanding high performance. The flexibility of the ODrive controller allows it to be used in a variety of applications including, but not limited to, robotics and CNC machines.
== Industrial Application ==
The ODrive motor controller finds its application in numerous sectors, particularly in robotics and CNC machines. This is mainly due to its flexibility, which facilitates control over a variety of motors ranging from hobby-grade to industrial ones. Furthermore, the precision and high performance offered by the controller make it a preferred choice for professionals.
== Technical Specifications ==
While the exact technical specifications of the ODrive motor controller can vary depending on the model, the controller is generally characterized by the following features:
- High performance motor control: Ensures precision and accuracy.
- Flexible interface: Facilitates easy integration with a variety of motors.
- Compatible with various types of electric motors: Ensures wide application.
== References ==
<references />
[[Category:Actuators]]
a98b91e32c1d8ccf85afb0608a91351fe7adfeca
715
714
2024-04-30T03:53:18Z
108.211.178.220
0
wikitext
text/x-wiki
'''ODrive''' is a precision motor controller that is highly applicable in robotics domains, CNC machines, and other areas demanding high-performance motor control. It provides accurate control over hobby-grade and industrial electric motors.
{{infobox actuator
| name = ODrive Motor Controller
| manufacturer = ODrive Robotics
| cost =
| purchase_link = https://odriverobotics.com/
| nominal_torque =
| peak_torque =
| weight =
| dimensions =
| gear_ratio =
| voltage =
| cad_link =
| interface =
| gear_type =
}}
== Product Description ==
ODrive is primarily used for the control of electric motors with precision. The device is well-suited for accurate control of both hobby-grade and industrial motors. The controller has been specially designed for applications demanding high performance. The flexibility of the ODrive controller allows it to be used in a variety of applications including, but not limited to, robotics and CNC machines.
== Industrial Application ==
The ODrive motor controller finds its application in numerous sectors, particularly in robotics and CNC machines. This is mainly due to its flexibility, which facilitates control over a variety of motors ranging from hobby-grade to industrial ones. Furthermore, the precision and high performance offered by the controller make it a preferred choice for professionals.
== Technical Specifications ==
While the exact technical specifications of the ODrive motor controller can vary depending on the model, the controller is generally characterized by the following features:
* High performance motor control: Ensures precision and accuracy.
* Flexible interface: Facilitates easy integration with a variety of motors.
* Compatible with various types of electric motors: Ensures wide application.
== References ==
<references />
[[Category:Actuators]]
6a3f4dc8028d469c4f2a7e01062a1e6b61d09c93
Building a PCB
0
41
706
651
2024-04-30T01:43:57Z
108.211.178.220
0
Add sections
wikitext
text/x-wiki
Walk-through and notes regarding how to design and ship a PCB.
== Designing with atopile ==
[[atopile]] enables code-defined pcb design. Follow atopile's [https://atopile.io/getting-started/ getting-started] guide to set up your project.
An example atopile PCB project is provided by the K-Scale Labs team [https://github.com/kscalelabs/atopile-pcb-example here]
PCB visualization software:
Connecting Traces
Exporting File for Manufacturing
Exporting BoM
== Ordering a PCB ==
Trusted low production PCB manufacturing companies:
* PCBWay
* JLCPCB
* SeeedStudio
Further PCB manufacturers and price comparisons for your specific project can be found [https://pcbshopper.com/ here]
== Related Articles ==
* [[atopile]]
[[Category: Hardware]]
[[Category: Guides]]
[[Category: Electronics]]
8368d1253a0a0a872a73a5aebdd8005744105bcf
708
706
2024-04-30T02:42:08Z
108.211.178.220
0
add initial sections
wikitext
text/x-wiki
Walk-through and notes regarding how to design and ship a PCB.
== Designing with atopile ==
[[atopile]] enables code-defined pcb design. Follow atopile's [https://atopile.io/getting-started/ getting-started] guide to set up your project.
An example atopile PCB project is provided by the K-Scale Labs team [https://github.com/kscalelabs/atopile-pcb-example here]
PCB visualization software:
Connecting Traces
Exporting File for Manufacturing
-Gerber Files
-Drill Files
-Map Files
-BoM
-CPL
== Ordering a PCB ==
Trusted low production PCB manufacturing companies:
* PCBWay
* JLCPCB
* SeeedStudio
Further PCB manufacturers and price comparisons for your specific project can be found [https://pcbshopper.com/ here]
== Related Articles ==
* [[atopile]]
[[Category: Hardware]]
[[Category: Guides]]
[[Category: Electronics]]
b2fde67a40b08a23791a03bf0c5697d6ac34ae3f
709
708
2024-04-30T02:42:27Z
108.211.178.220
0
wikitext
text/x-wiki
Walk-through and notes regarding how to design and ship a PCB.
== Designing with atopile ==
[[atopile]] enables code-defined pcb design. Follow atopile's [https://atopile.io/getting-started/ getting-started] guide to set up your project.
An example atopile PCB project is provided by the K-Scale Labs team [https://github.com/kscalelabs/atopile-pcb-example here]
PCB visualization software:
Connecting Traces
Exporting File for Manufacturing
-Gerber Files
-Drill Files
-Map Files
-BoM
-CPL
== Ordering a PCB ==
Trusted low production PCB manufacturing companies:
* PCBWay
* JLCPCB
* SeeedStudio
Further PCB manufacturers and price comparisons for your specific project can be found [https://pcbshopper.com/ here]
== Related Articles ==
* [[atopile]]
[[Category: Hardware]]
[[Category: Guides]]
[[Category: Electronics]]
df535f7d00946c3a90f42a7467d9fb766f65e2c0
710
709
2024-04-30T02:42:56Z
108.211.178.220
0
wikitext
text/x-wiki
Walk-through and notes regarding how to design and ship a PCB.
== Designing with atopile ==
[[atopile]] enables code-defined pcb design. Follow atopile's [https://atopile.io/getting-started/ getting-started] guide to set up your project.
An example atopile PCB project is provided by the K-Scale Labs team [https://github.com/kscalelabs/atopile-pcb-example here]
PCB visualization software:
Connecting Traces
Exporting File for Manufacturing
* Gerber Files
* Drill Files
* Map Files
* BoM
* CPL
== Ordering a PCB ==
Trusted low production PCB manufacturing companies:
* PCBWay
* JLCPCB
* SeeedStudio
Further PCB manufacturers and price comparisons for your specific project can be found [https://pcbshopper.com/ here]
== Related Articles ==
* [[atopile]]
[[Category: Hardware]]
[[Category: Guides]]
[[Category: Electronics]]
19e0cac56f251c4209539ea4a663672e7fa7013f
User:Matt
2
171
711
2024-04-30T02:43:40Z
Matt
16
Initial Commit
wikitext
text/x-wiki
Matt
0f9fe690f38da67968280971584cf9c16541f07b
712
711
2024-04-30T02:44:24Z
Ben
2
wikitext
text/x-wiki
Matt?
666f830a57df374086bb652c333454e02fef3538
713
712
2024-04-30T02:44:58Z
Ben
2
wikitext
text/x-wiki
Matt
[[Category:Mazda Miata Enthusiasts]]
38616250f26fd5adbcd461cf412acf9250b6fa75
File:Trace Tool.png
6
172
716
2024-04-30T04:49:50Z
Matt
16
wikitext
text/x-wiki
KiCad Trace Tool
a0a666c2b5b3bec72270c3521c031d20d5172346
K-Scale Cluster
0
16
717
502
2024-04-30T05:02:28Z
108.211.178.220
0
wikitext
text/x-wiki
The K-Scale Labs clusters are shared clusters for robotics research. This page contains notes on how to access them.
=== Onboarding ===
To get onboarded, you should send us the public key that you want to use and maybe your preferred username.
After being onboarded, you should receive the following information:
* Your user ID (for this example, we'll use <code>stompy</code>)
* The jumphost ID (for this example, we'll use <code>127.0.0.1</code>)
* The cluster ID (for this example, we'll use <code>127.0.0.2</code>)
To connect, you should be able to use the following command:
<syntaxhighlight lang="bash">
ssh -o ProxyCommand="ssh -i ~/.ssh/id_rsa -W %h:%p stompy@127.0.0.1" stompy@127.0.0.2 -i ~/.ssh/id_rsa
</syntaxhighlight>
Note that <code>~/.ssh/id_rsa</code> should point to your private key file.
Alternatively, you can add the following to your SSH config file, which should allow you to connect directly.
Use your favorite editor to open the SSH config file (normally located at <code>~/.ssh/config</code> on Ubuntu) and paste the following:
<syntaxhighlight lang="text">
Host jumphost
User stompy
Hostname 127.0.0.1
IdentityFile ~/.ssh/id_rsa
Host cluster
User stompy
Hostname 127.0.0.2
ProxyJump jumphost
IdentityFile ~/.ssh/id_rsa
</syntaxhighlight>
After setting this up, you can use the command <code>ssh cluster</code> to directly connect.
You can also connect via VS Code; a tutorial on using <code>ssh</code> in VS Code is available [https://code.visualstudio.com/docs/remote/ssh-tutorial here].
Please inform us if you have any issues!
=== Notes ===
* You may need to restart <code>ssh</code> to get it working.
* You may be sharing your part of the cluster with other users. If so, it is a good idea to avoid using all the GPUs. If you're training models in PyTorch, you can do this using the <code>CUDA_VISIBLE_DEVICES</code> environment variable.
* You should avoid storing data files and model checkpoints in your root directory. Instead, use the <code>/ephemeral</code> directory. Your home directory should come with a symlink to a subdirectory which you have write access to.
=== Cluster 1 ===
=== Cluster 2 ===
The cluster has 8 available nodes (each with 8 GPUs):
<syntaxhighlight lang="text">
compute-permanent-node-68
compute-permanent-node-285
compute-permanent-node-493
compute-permanent-node-625
compute-permanent-node-626
compute-permanent-node-749
compute-permanent-node-801
compute-permanent-node-580
</syntaxhighlight>
When you SSH in, you first land on the bastion node <code>pure-caribou-bastion</code>, from which you can log in to any other node to test your code.
55b46b4aa8353e0ee2f7659e997974e774b707b6
718
717
2024-04-30T05:05:24Z
108.211.178.220
0
wikitext
text/x-wiki
The K-Scale Labs clusters are shared clusters for robotics research. This page contains notes on how to access them.
=== Onboarding ===
To get onboarded, you should send us the public key that you want to use and maybe your preferred username.
After being onboarded, you should receive the following information:
* Your user ID (for this example, we'll use <code>stompy</code>)
* The jumphost ID (for this example, we'll use <code>127.0.0.1</code>)
* The cluster ID (for this example, we'll use <code>127.0.0.2</code>)
To connect, you should be able to use the following command:
<syntaxhighlight lang="bash">
ssh -o ProxyCommand="ssh -i ~/.ssh/id_rsa -W %h:%p stompy@127.0.0.1" stompy@127.0.0.2 -i ~/.ssh/id_rsa
</syntaxhighlight>
Note that <code>~/.ssh/id_rsa</code> should point to your private key file.
Alternatively, you can add the following to your SSH config file, which should allow you to connect directly.
Use your favorite editor to open the SSH config file (normally located at <code>~/.ssh/config</code> on Ubuntu) and paste the following:
<syntaxhighlight lang="text">
Host jumphost
User stompy
Hostname 127.0.0.1
IdentityFile ~/.ssh/id_rsa
Host cluster
User stompy
Hostname 127.0.0.2
ProxyJump jumphost
IdentityFile ~/.ssh/id_rsa
</syntaxhighlight>
After setting this up, you can use the command <code>ssh cluster</code> to directly connect.
You can also connect via VS Code; a tutorial on using <code>ssh</code> in VS Code is available [https://code.visualstudio.com/docs/remote/ssh-tutorial here].
Please inform us if you have any issues!
=== Notes ===
* You may need to restart <code>ssh</code> to get it working.
* You may be sharing your part of the cluster with other users. If so, it is a good idea to avoid using all the GPUs. If you're training models in PyTorch, you can do this using the <code>CUDA_VISIBLE_DEVICES</code> environment variable (see the example after these notes).
* You should avoid storing data files and model checkpoints in your root directory. Instead, use the <code>/ephemeral</code> directory. Your home directory should come with a symlink to a subdirectory which you have write access to.
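A minimal sketch of the GPU note above (<code>train.py</code> is just a placeholder for your own training script):
<syntaxhighlight lang="bash">
# Expose only GPUs 0 and 1 to this shell; other users can use the remaining GPUs.
export CUDA_VISIBLE_DEVICES=0,1
python train.py

# Equivalent one-off form, scoped to a single command:
CUDA_VISIBLE_DEVICES=0,1 python train.py
</syntaxhighlight>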
=== Cluster 1 ===
=== Cluster 2 ===
The cluster has 8 available nodes (each with 8 GPUs):
<syntaxhighlight lang="text">
compute-permanent-node-68
compute-permanent-node-285
compute-permanent-node-493
compute-permanent-node-625
compute-permanent-node-626
compute-permanent-node-749
compute-permanent-node-801
compute-permanent-node-580
</syntaxhighlight>
When you SSH in, you first land on the bastion node <code>pure-caribou-bastion</code>, from which you can log in to any other node to test your code.
[[Category:K-Scale]]
0e3d4eb093b5e27ee04f14052c610f64b7545786
Category:K-Scale
14
173
719
2024-04-30T05:05:35Z
108.211.178.220
0
Created page with "Any documents related to K-Scale"
wikitext
text/x-wiki
Any documents related to K-Scale
962f6bee8a7d9a538e9e0aa0a1038cee4a0a6a74
K-Scale Labs
0
5
720
201
2024-04-30T05:05:56Z
108.211.178.220
0
wikitext
text/x-wiki
[[File:Logo.png|right|200px|thumb]]
[https://kscale.dev/ K-Scale Labs] is building an open-source humanoid robot called [[Stompy]].
{{infobox company
| name = K-Scale Labs
| country = United States
| website_link = https://kscale.dev/
| robots = [[Stompy]]
}}
[[Category:Companies]]
[[Category:K-Scale]]
0003fc929362ae70b228c059e5709b560bdb5842
Stompy
0
2
721
660
2024-04-30T05:06:03Z
108.211.178.220
0
wikitext
text/x-wiki
[[File:Stompy.jpg|right|300px|thumb]]
{{infobox robot
| name = Stompy
| organization = [[K-Scale Labs]]
| cost = USD 10,000
}}
Stompy is an open-source humanoid robot developed by [[K-Scale Labs]]. Here are some relevant links:
* [[Stompy To-Do List]]
* [[Stompy Build Guide]]
= Hardware =
This page is dedicated to detailing the hardware selections for humanoid robots, including various components such as actuators, cameras, compute units, PCBs and modules, batteries, displays, microphones, speakers, as well as wiring and connectors.
== Actuators ==
Actuators are the components that allow the robot to move and interact with its environment. They convert energy into mechanical motion. Common types used in humanoid robots include:
* Servo motors
* Stepper motors
* Linear actuators
== Cameras ==
Cameras are essential for visual processing, allowing the robot to perceive its surroundings. Important considerations include:
* Resolution and frame rate
* Field of view
* Depth sensing capabilities (3D cameras)
== Compute ==
The compute section handles the processing requirements of the robot. This includes:
* Microprocessors and microcontrollers
* Single-board computers like Raspberry Pi or Nvidia Jetson
* Dedicated AI accelerators for machine learning tasks
== PCB and Modules ==
Printed Circuit Boards (PCBs) and the modules on them are the backbone of the robot's electronic system.
* Main control board
* Power management modules
* Sensor interfaces
* Communication modules (Wi-Fi, Bluetooth)
== Batteries ==
Batteries provide the necessary power to all robotic systems and are crucial for mobile autonomy. Selection factors include:
* Battery type (Li-Ion, NiMH, Lead-Acid)
* Capacity (measured in mAh or Ah)
* Voltage and energy density
* Safety features and durability
== Displays ==
Displays are used in robots for displaying information such as system status, data, and interactive elements. Key features include:
* Size variations ranging from small to large panels
* Touchscreen capabilities
* High resolution displays
== Microphones ==
Microphones enable the robot to receive and process audio inputs, crucial for voice commands and auditory data. Factors to consider are:
* Sensitivity and noise cancellation
* Directionality (omnidirectional vs. unidirectional)
* Integration with voice recognition software
== Speakers ==
Speakers allow the robot to communicate audibly with its environment, essential for interaction and alerts. Considerations include:
* Power output and sound quality
* Size and mounting options
* Compatibility with audio processing hardware
== Wiring and Connectors ==
Proper wiring and connectors ensure reliable communication and power supply throughout the robot's components.
* Types of wires (gauge, shielding)
* Connectors (pin types, waterproofing)
* Cable management solutions
= Simulation =
For the latest simulation artifacts, see [https://kscale.dev/ the website].
[[Category:Robots]]
[[Category:Open Source]]
[[Category:K-Scale]]
50da2f5b2f67b5b4e961ff4826f8116d56978031
File:JLCPCB BOM Format.png
6
174
723
2024-04-30T05:22:39Z
Matt
16
wikitext
text/x-wiki
JLCPCB BOM Format Requirement
8a4d49ab05368e0ba869f30ccfc60c4b696ddb7a
Jetson Flashing Walkthrough
0
175
724
2024-04-30T05:23:36Z
108.211.178.220
0
Created page with "This document provides instructions on how to flash a Jetson Orin Nano or AGX. [[Category: Stompy, Expand!]]"
wikitext
text/x-wiki
This document provides instructions on how to flash a Jetson Orin Nano or AGX.
[[Category: Stompy, Expand!]]
82fb08432cc354db363d0df944f49991e483dcc3
Template:Infobox protocol
10
176
726
2024-04-30T05:26:53Z
108.211.178.220
0
Created page with "{{infobox | name = {{{name}}} | key1 = Name | value1 = {{{name}}} | key2 = Organization | value2 = {{{organization|}}} | key3 = Introduced | value3 = {{{introduced_date|}}} |..."
wikitext
text/x-wiki
{{infobox
| name = {{{name}}}
| key1 = Name
| value1 = {{{name}}}
| key2 = Organization
| value2 = {{{organization|}}}
| key3 = Introduced
| value3 = {{{introduced_date|}}}
| key4 = Standard
| value4 = {{{standard|}}}
| key5 = Network Type
| value5 = {{{network_type|}}}
| key6 = Topology
| value6 = {{{topology|}}}
| key7 = Speed
| value7 = {{{speed|}}}
| key8 = Distance
| value8 = {{{distance|}}}
| key9 = Website
| value9 = {{#if: {{{website_link|}}} | [{{{website_link}}} Website] }}
}}
eefccfbb6d6a7cbc074071d9f2a7b2d3456bbf93
EtherCAT
0
168
727
725
2024-04-30T05:27:08Z
108.211.178.220
0
wikitext
text/x-wiki
EtherCAT (Ethernet for Control Automation Technology) is a reliable, real-time Ethernet-based fieldbus system initially developed and implemented by Beckhoff Automation. Remarkably effective, this system uses a highly efficient communication method that allows data to be transferred from an industrial computer and communicated downstream to each piece of equipment in a daisy-chain configuration. The EtherCAT protocol is open and has gained wide acceptance due to its real-time capabilities and robust performance in industrial environments.
{{infobox protocol
| name = EtherCAT
| organization = Beckhoff Automation
| introduced_date = 2003
| standard = IEC 61158, IEC 61784-2
| network_type = Ethernet-based
| topology = Daisy-chain, star, tree, or mixed arrangements
| speed = Up to 100 Mbit/s
| distance = Up to 100m (over copper cable), Up to 20km (over fiber)
| website_link = https://www.ethercat.org/
}}
The protocol was developed to offer high performance, real-time capabilities, and cost-effectiveness for systems that range from small to large scale. It finds a vast array of applications in machine controls, as well as in robotics and other fields that require high-speed, reliable communication between controllers and devices.
This system is compatible with standard Ethernet and, therefore, does not require any specific hardware, which makes it a cost-effective solution. EtherCAT's ability to function on existing networks means it can work in tandem with other protocols and can share the same cables, switches, and the like, including industrial Ethernet infrastructure.
== History ==
The EtherCAT technology was initially introduced by Beckhoff Automation in 2003. It was then smoothly transferred to the EtherCAT Technology Group (ETG) and has since been developed and maintained by this group. ETG is an open, non-profit organization that offers membership to manufacturers, developers, and users. The protocol has now established itself globally, becoming a high-performance Ethernet communication system in automation technology due to its real-time capabilities and the ability to connect a large number of devices to a single network.
== Key Features of EtherCAT ==
EtherCAT offers several distinguishing features that contribute to its wide-spread use:
* '''High Speed''': The system can process 1000 distributed I/O in 30 μs, or communicate with 100 servo axes in 100 μs.
* '''Efficiency''': Data is not just passed along the communication line, but devices can directly read and write data as it flows by.
* '''Robustness''': EtherCAT features error detection and flexible topology configurations to attain robustness against failure or data corruption.
* '''Operability''': It allows for online configuration and diagnostics, making it a flexible and operable solution.
* '''Cost-effectiveness''': Requires minimal hardware and can function over standard Ethernet, making its implementation cost-effective compared to other fieldbus systems.
== Applications ==
The efficiency, robustness, and high speed of EtherCAT have made it a preferred choice in a host of applications:
* '''Industrial Automation''': EtherCAT's high-speed, deterministic communication and robust error detection make it widely accepted in industrial automation, such as assembly lines, packing machinery, and material handling.
* '''Robotics''': The perfect choice for complex servo control systems due to its capacity to handle fast control loops and synchronize large amounts of data.
* '''Wind Energy''': EtherCAT is commonly used to control and monitor wind turbines due to its ability to accurately control multiple axes simultaneously.
* '''Medical Technology''': In medical technology, EtherCAT is embraced for applications such as medical imaging, where it can handle the transfer of large amounts of data in real time.
[[Category:Communication]]
== References ==
<references />
611ca1826f8ab54efafc70c088fc71d8f17f5a7d
728
727
2024-04-30T05:27:30Z
108.211.178.220
0
wikitext
text/x-wiki
EtherCAT (Ethernet for Control Automation Technology) is a reliable, real-time Ethernet-based fieldbus system initially developed and implemented by Beckhoff Automation. Remarkably effective, this system uses a highly efficient communication method that allows data to be transferred from an industrial computer and communicated downstream to each piece of equipment in a daisy-chain configuration. The EtherCAT protocol is open and has gained wide acceptance due to its real-time capabilities and robust performance in industrial environments.
{{infobox protocol
| name = EtherCAT
| organization = Beckhoff Automation
| introduced_date = 2003
| standard = IEC 61158, IEC 61784-2
| network_type = Ethernet-based
| topology = Daisy-chain, star, tree, or mixed arrangements
| speed = Up to 100 Mbit/s
| distance = Up to 100m (over copper cable), Up to 20km (over fiber)
| website_link = https://www.ethercat.org/
}}
The protocol was developed to offer high performance, real-time capabilities, and cost-effectiveness for systems that range from small to large scale. It finds a vast array of applications in machine controls, as well as in robotics and other fields that require high-speed, reliable communication between controllers and devices.
This system is compatible with standard Ethernet and, therefore, does not require any specific hardware, which makes it a cost-effective solution. EtherCAT's ability to function on existing networks means it can work in tandem with other protocols and can share the same cables, switches, and the like, including industrial Ethernet infrastructure.
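As an illustration of how an EtherCAT segment is commonly inspected on Linux, the sketch below assumes the open-source IgH EtherCAT Master (EtherLab) is installed and bound to the network port wired to the first device; the exact service and package names vary by distribution.
<syntaxhighlight lang="bash">
# Start the EtherCAT master service (name depends on how the master was installed).
sudo /etc/init.d/ethercat start

# Show the master's state and the Ethernet interface it is attached to.
ethercat master

# List every slave found on the daisy-chained segment, in bus order.
ethercat slaves
</syntaxhighlight>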
== History ==
The EtherCAT technology was initially introduced by Beckhoff Automation in 2003. It was then smoothly transferred to the EtherCAT Technology Group (ETG) and has since been developed and maintained by this group. ETG is an open, non-profit organization that offers membership to manufacturers, developers, and users. The protocol has now established itself globally, becoming a high-performance Ethernet communication system in automation technology due to its real-time capabilities and the ability to connect a large number of devices to a single network.
== Key Features of EtherCAT ==
EtherCAT offers several distinguishing features that contribute to its wide-spread use:
* '''High Speed''': The system can process 1000 distributed I/O in 30 μs, or communicate with 100 servo axes in 100 μs.
* '''Efficiency''': Data is not just passed along the communication line, but devices can directly read and write data as it flows by.
* '''Robustness''': EtherCAT features error detection and flexible topology configurations to attain robustness against failure or data corruption.
* '''Operability''': It allows for online configuration and diagnostics, making it a flexible and operable solution.
* '''Cost-effectiveness''': Requires minimal hardware and can function over standard Ethernet, making its implementation cost-effective compared to other fieldbus systems.
== Applications ==
The efficiency, robustness, and high speed of EtherCAT have made it a preferred choice in a host of applications:
* '''Industrial Automation''': EtherCAT's high-speed, deterministic communication and robust error detection make it widely accepted in industrial automation, such as assembly lines, packing machinery, and material handling.
* '''Robotics''': The perfect choice for complex servo control systems due to its capacity to handle fast control loops and synchronize large amounts of data.
* '''Wind Energy''': EtherCAT is commonly used to control and monitor wind turbines due to its ability to accurately control multiple axes simultaneously.
* '''Medical Technology''': In medical technology, EtherCAT is embraced for applications such as medical imaging, where it can handle the transfer of large amounts of data in real time.
[[Category:Communication]]
3e2fe1db9ed55dfee81e56797e7b8af3dd66c5ed
File:Pos settings.png
6
177
729
2024-04-30T05:31:48Z
Matt
16
wikitext
text/x-wiki
settings for getting the CPL
a7e600164c8cdc35eb38977780ac8d6d5ffa9959
File:JLCPCB CPL.png
6
178
730
2024-04-30T05:36:27Z
Matt
16
wikitext
text/x-wiki
JLCPCB Example CPL
33a19765ef9ab790c0c3359a55c84b2e80e130b3
Building a PCB
0
41
731
710
2024-04-30T05:41:54Z
Matt
16
Add all information
wikitext
text/x-wiki
Walk-through and notes regarding how to design and ship a PCB.
== Designing with atopile ==
[[atopile]] enables code-defined PCB design. Follow atopile's [https://atopile.io/getting-started/ getting-started] guide to set up your project.
An example atopile PCB project is provided by the K-Scale Labs team [https://github.com/kscalelabs/atopile-pcb-example here].
=== Importing into KiCad ===
After completing the atopile setup and building your atopile project, you will need to import the build into [https://www.kicad.org/ KiCad].
To import your design into KiCad,
# Open <your-project>/elec/layout/default/<your-project-name>.kicad_pro with KiCad.
# Delete all of the board outlines you do not want to keep (by default there are 3 options to choose from)
# Go to File->Import->Netlist...
# In the Import Netlist pop-up, select your .net file to import, typically located at <your-project>/build/default.net
# Click "Load and Test Netlist"
# Click "Update PCB"
# Click "Close"
# Select anywhere on the screen where to place the components (you can move them later)
atopile automatically connects the necessary components together, but you will still have to manually create your preferred layout and draw the connecting traces/routes (KiCad makes this process very simple)
=== Connecting Traces ===
After positioning your board components, you will have to connect them using the KiCad router tool, seen circled in red below:
[[File:Trace Tool.png|center|thumb|KiCad Trace Tool]]
To use this tool, simply select the router tool icon on the right-hand side of the KiCad program window and select a component's pin to begin. The KiCad program will give you a visualization of which components you should trace towards.
Connect all traces and verify no components have been left unconnected.
=== Exporting Files for Manufacturing ===
There are multiple files required to get a PCB manufactured. Each manufacturer may have different requirements.
For this example, we will be using [https://jlcpcb.com/ JLCPCB's] PCB manufacturing services.
JLCPCB requires:
* Gerber Files
* Drill Files
* Map Files
* BoM
* CPL
==== Exporting Gerber, Drill, and Map Files ====
For detailed instructions, follow JLCPCB's KiCad export guide [https://jlcpcb.com/help/article/362-how-to-generate-gerber-and-drill-files-in-kicad-7 here].
==== Exporting BoM (Bill of Materials) ====
atopile will automatically make the BoM for you, although you may need to reformat the header & information to meet the requirements of your manufacturer.
The JLCPCB format can be seen below:
[[File:JLCPCB BOM Format.png|center|thumb|JLCPCB BOM Requirement]]
atopile's BoM file can be found in the build directory, typically called "default.csv" (<your-project>/build/default.csv)
==== Exporting CPL (Component Placement List) ====
KiCad allows for quick and easy CPL exporting, although you will have to reformat your information to fit your manufacturer's requirements.
To export a CPL from your KiCad project:
# Go to File -> Fabrication Outputs -> Component Placement
# Select proper output directory
# Use CSV, Millimeters, and Separate files for front, back settings:
[[File:Pos settings.png|center|thumb|Settings for CPL Generation in KiCad]]
# Click "Generate Position File"
# Fix the output file to match your manufacturer's requirements (JLCPCB example provided)
[[File:JLCPCB CPL.png|center|thumb|JLCPCB CPL Example Format]]
== Ordering a PCB ==
Trusted low production PCB manufacturing companies:
* PCBWay
* JLCPCB
* SeeedStudio
Further PCB manufacturers and price comparisons for your specific project can be found [https://pcbshopper.com/ here]
== Related Articles ==
* [[atopile]]
[[Category: Hardware]]
[[Category: Guides]]
[[Category: Electronics]]
a7a7d8d603619ba8cd9d4647d53351a91a6e2189
732
731
2024-04-30T05:44:06Z
Matt
16
space
wikitext
text/x-wiki
Walk-through and notes regarding how to design and ship a PCB.
== Designing with atopile ==
[[atopile]] enables code-defined pcb design. Follow atopile's [https://atopile.io/getting-started/ getting-started] guide to set up your project.
An example atopile PCB project is provided by the K-Scale Labs team [https://github.com/kscalelabs/atopile-pcb-example here]
=== Importing into KiCad ===
After completing the atopile setup and building your atopile project, you will need to import the build into [https://www.kicad.org/ KiCad].
To import your design into KiCad,
# Open <your-project>/elec/layout/default/<your-project-name>.kicad_pro with KiCad.
# Delete the board outlines you do not want to use (by default there are 3 options to choose from)
# Go to File->Import->Netlist...
# In the Import Netlist pop-up, select your .net file to import, typically located at <your-project>/build/default.net
# Click "Load and Test Netlist"
# Click "Update PCB"
# Click "Close"
# Click anywhere on the screen to place the components (you can move them later)
atopile automatically connects the necessary components in the netlist, but you will still have to arrange your preferred layout and draw the connecting traces/routes yourself (KiCad makes this process straightforward).
=== Connecting Traces ===
After positioning your board components, you will have to connect them using the KiCad router tool, seen circled in red below:
[[File:Trace Tool.png|center|thumb|KiCad Trace Tool]]
To use this tool, select the router tool icon on the right-hand side of the KiCad window and click a component's pin to begin.
The KiCad program will give you a visualization of which components you should route towards.
Connect all traces and verify no components have been left unconnected.
=== Exporting Files for Manufacturing ===
There are multiple files required to get a PCB manufactured. Each manufacturer may have different requirements.
For this example, we will be using [https://jlcpcb.com/ JLCPCB's] PCB manufacturing services.
JLCPCB requires:
* Gerber Files
* Drill Files
* Map Files
* BoM
* CPL
==== Exporting Gerber, Drill, and Map Files ====
For detailed instructions, follow JLCPCB's KiCad export instructions [https://jlcpcb.com/help/article/362-how-to-generate-gerber-and-drill-files-in-kicad-7 here].
==== Exporting BoM (Bill of Materials) ====
atopile will automatically generate the BoM for you, although you may need to reformat the header & column information to meet the requirements of your manufacturer.
The JLCPCB format can be seen below:
[[File:JLCPCB BOM Format.png|center|thumb|JLCPCB BOM Requirement]]
atopile's BoM file can be found in the build directory, typically called "default.csv" (<your-project>/build/default.csv)
==== Exporting CPL (Component Placement List) ====
KiCad allows for quick and easy CPL exporting, although you will have to reformat the output to fit your manufacturer's requirements.
To export a CPL from your KiCad project:
# Go to File -> Fabrication Outputs -> Component Placement
# Select the proper output directory
# Use the CSV, Millimeters, and "Separate files for front, back" settings:
[[File:Pos settings.png|center|thumb|Settings for CPL Generation in KiCad]]
# Click "Generate Position File"
# Fix the output file to match your manufacturer's requirements (JLCPCB example provided)
[[File:JLCPCB CPL.png|center|thumb|JLCPCB CPL Example Format]]
== Ordering a PCB ==
Trusted low-volume PCB manufacturers:
* PCBWay
* JLCPCB
* SeeedStudio
Further PCB manufacturers and price comparisons for your specific project can be found [https://pcbshopper.com/ here]
== Related Articles ==
* [[atopile]]
[[Category: Hardware]]
[[Category: Guides]]
[[Category: Electronics]]
11af264c9970a896215de47fc31053cb348980aa
733
732
2024-04-30T05:44:46Z
108.211.178.220
0
wikitext
text/x-wiki
Walk-through and notes regarding how to design and ship a PCB.
== Designing with atopile ==
[[atopile]] enables code-defined pcb design. Follow atopile's [https://atopile.io/getting-started/ getting-started] guide to set up your project.
An example atopile PCB project is provided by the K-Scale Labs team [https://github.com/kscalelabs/atopile-pcb-example here]
=== Importing into KiCad ===
After completing the atopile setup and building your atopile project, you will need to import the build into [https://www.kicad.org/ KiCad].
To import your design into KiCad,
# Open <your-project>/elec/layout/default/<your-project-name>.kicad_pro with KiCad.
# Delete the board outlines you do not want to use (by default there are 3 options to choose from)
# Go to File->Import->Netlist...
# In the Import Netlist pop-up, select your .net file to import, typically located at <your-project>/build/default.net
# Click "Load and Test Netlist"
# Click "Update PCB"
# Click "Close"
# Click anywhere on the screen to place the components (you can move them later)
atopile automatically connects the necessary components in the netlist, but you will still have to arrange your preferred layout and draw the connecting traces/routes yourself (KiCad makes this process straightforward).
=== Connecting Traces ===
After positioning your board components, you will have to connect them using the KiCad router tool, seen circled in red below:
[[File:Trace Tool.png|center|thumb|KiCad Trace Tool]]
To use this tool, select the router tool icon on the right-hand side of the KiCad window and click a component's pin to begin.
The KiCad program will give you a visualization of which components you should route towards.
Connect all traces and verify no components have been left unconnected.
=== Exporting Files for Manufacturing ===
There are multiple files required to get a PCB manufactured. Each manufacturer may have different requirements.
For this example, we will be using [https://jlcpcb.com/ JLCPCB's] PCB manufacturing services.
JLCPCB requires:
* Gerber Files
* Drill Files
* Map Files
* BoM
* CPL
==== Exporting Gerber, Drill, and Map Files ====
For detailed instructions, follow JLCPCB's KiCad export instructions [https://jlcpcb.com/help/article/362-how-to-generate-gerber-and-drill-files-in-kicad-7 here].
==== Exporting BoM (Bill of Materials) ====
atopile will automatically generate the BoM for you, although you may need to reformat the header & column information to meet the requirements of your manufacturer.
The JLCPCB format can be seen below:
[[File:JLCPCB BOM Format.png|center|500px|thumb|JLCPCB BOM Requirement]]
atopile's BoM file can be found in the build directory, typically called "default.csv" (<your-project>/build/default.csv)
==== Exporting CPL (Component Placement List) ====
KiCad allows for quick and easy CPL exporting, although you will have to reformat the output to fit your manufacturer's requirements.
To export a CPL from your KiCad project:
# Go to File -> Fabrication Outputs -> Component Placement
# Select the proper output directory
# Use the CSV, Millimeters, and "Separate files for front, back" settings:
[[File:Pos settings.png|center|500px|thumb|Settings for CPL Generation in KiCad]]
# Click "Generate Position File"
# Fix the output file to match your manufacturer's requirements (JLCPCB example provided)
[[File:JLCPCB CPL.png|center|500px|thumb|JLCPCB CPL Example Format]]
== Ordering a PCB ==
Trusted low-volume PCB manufacturers:
* PCBWay
* JLCPCB
* SeeedStudio
Further PCB manufacturers and price comparisons for your specific project can be found [https://pcbshopper.com/ here]
== Related Articles ==
* [[atopile]]
[[Category: Hardware]]
[[Category: Guides]]
[[Category: Electronics]]
f5b357e6ddbfe774b9f34a46d04c172ba472db00
734
733
2024-04-30T05:45:08Z
Matt
16
Gram
wikitext
text/x-wiki
Walk-through and notes regarding how to design and ship a PCB.
== Designing with atopile ==
[[atopile]] enables code-defined pcb design. Follow atopile's [https://atopile.io/getting-started/ getting-started] guide to set up your project.
An example atopile PCB project is provided by the K-Scale Labs team [https://github.com/kscalelabs/atopile-pcb-example here]
=== Importing into KiCad ===
After completing the atopile setup and building your atopile project, you will need to import the build into [https://www.kicad.org/ KiCad].
To import your design into KiCad,
# Open <your-project>/elec/layout/default/<your-project-name>.kicad_pro with KiCad.
# Delete the board outlines you do not want to use (by default there are 3 options to choose from)
# Go to File->Import->Netlist...
# In the Import Netlist pop-up, select your .net file to import, typically located at <your-project>/build/default.net
# Click "Load and Test Netlist"
# Click "Update PCB"
# Click "Close"
# Click anywhere on the screen to place the components (you can move them later)
atopile automatically connects the necessary components in the netlist, but you will still have to arrange your preferred layout and draw the connecting traces/routes yourself (KiCad makes this process straightforward).
=== Connecting Traces ===
After positioning your board components, you will have to connect them using the KiCad router tool, seen circled in red below:
[[File:Trace Tool.png|center|thumb|KiCad Trace Tool]]
To use this tool, select the router tool icon on the right-hand side of the KiCad window and click a component's pin to begin.
The KiCad program will give you a visualization of which components you should route towards.
Connect all traces and verify no components have been left unconnected.
=== Exporting Files for Manufacturing ===
There are multiple files required to get a PCB manufactured. Each manufacturer may have different requirements.
For this example, we will be using [https://jlcpcb.com/ JLCPCB's] PCB manufacturing services.
JLCPCB requires:
* Gerber Files
* Drill Files
* Map Files
* BoM
* CPL
==== Exporting Gerber, Drill, and Map Files ====
For detailed instructions, follow JLCPCB's KiCad export instructions [https://jlcpcb.com/help/article/362-how-to-generate-gerber-and-drill-files-in-kicad-7 here].
==== Exporting BoM (Bill of Materials) ====
atopile will automatically generate the BoM for you, although you may need to reformat the header & column information to meet the requirements of your manufacturer.
The JLCPCB format can be seen below:
[[File:JLCPCB BOM Format.png|center|500px|thumb|JLCPCB BOM Requirement]]
atopile's BoM file can be found in the build directory, typically called "default.csv" (<your-project>/build/default.csv)
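If you would rather script the reformatting than edit the CSV by hand, a rough Python sketch along the lines below can rename the columns. The file paths and the column names in the mapping are illustrative assumptions, not atopile's or JLCPCB's documented formats; check your actual BoM and JLCPCB's current template before relying on it.
<syntaxhighlight lang="python">
"""Rough sketch: rename BoM columns to a JLCPCB-style header.

The input/output paths and the column names in COLUMN_MAP are assumptions
for illustration; adjust them to your actual BoM and to JLCPCB's template.
"""
import csv

COLUMN_MAP = {
    "Designator": "Designator",
    "Value": "Comment",
    "Footprint": "Footprint",
    "JLCPCB Part Number": "JLCPCB Part #",
}

with open("build/default.csv", newline="") as src, \
        open("bom_jlcpcb.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=list(COLUMN_MAP.values()))
    writer.writeheader()
    for row in reader:
        # Keep only the mapped columns, renaming them on the way out.
        writer.writerow({new: row.get(old, "") for old, new in COLUMN_MAP.items()})
</syntaxhighlight>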
==== Exporting CPL (Component Placement List) ====
KiCad allows for quick and easy CPL exporting, although you will have to reformat the output to fit your manufacturer's requirements.
To export a CPL from your KiCad project:
# Go to File -> Fabrication Outputs -> Component Placement
# Select the proper output directory
# Use the CSV, Millimeters, and "Separate files for front, back" settings:
[[File:Pos settings.png|center|500px|thumb|Settings for CPL Generation in KiCad]]
# Click "Generate Position File"
# Fix the output file to match your manufacturer's requirements (JLCPCB example provided; a scripted conversion is sketched below)
[[File:JLCPCB CPL.png|center|500px|thumb|JLCPCB CPL Example Format]]
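This last renaming step can also be scripted. The sketch below converts a KiCad position file into a JLCPCB-style CPL; it assumes KiCad's usual Ref/Val/Package/PosX/PosY/Rot/Side header and a JLCPCB header of Designator/Mid X/Mid Y/Layer/Rotation, and the file names are placeholders, so verify both against your own exports and JLCPCB's current template.
<syntaxhighlight lang="python">
"""Rough sketch: convert a KiCad position file into a JLCPCB-style CPL.

File names are placeholders; the headers assumed here (KiCad: Ref, Val,
Package, PosX, PosY, Rot, Side; JLCPCB: Designator, Mid X, Mid Y, Layer,
Rotation) should be checked against your export and JLCPCB's template.
"""
import csv

with open("your-project-top-pos.csv", newline="") as src, \
        open("cpl_jlcpcb.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(
        dst, fieldnames=["Designator", "Mid X", "Mid Y", "Layer", "Rotation"]
    )
    writer.writeheader()
    for row in reader:
        writer.writerow({
            "Designator": row["Ref"],
            "Mid X": row["PosX"],  # exported in millimeters with the settings above
            "Mid Y": row["PosY"],
            "Layer": "Top" if row["Side"].strip().lower() == "top" else "Bottom",
            "Rotation": row["Rot"],
        })
</syntaxhighlight>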
== Ordering a PCB ==
Trusted low production PCB manufacturing companies:
* PCBWay
* JLCPCB
* SeeedStudio
Further PCB manufacturers and price comparisons for your specific project can be found [https://pcbshopper.com/ here]
== Related Articles ==
* [[atopile]]
[[Category: Hardware]]
[[Category: Guides]]
[[Category: Electronics]]
7488963f2cfc5d65254a65c14ec8d215bc4e5c48
User:Matt
2
171
735
713
2024-04-30T05:51:41Z
Matt
16
Add info
wikitext
text/x-wiki
Matt is the co-founder and hardware lead of [[K-Scale Labs]].
{{infobox person
| name = Matt Freed
| organization = [[K-Scale Labs]]
| title = Head of Hardware
}}
[[Category: K-Scale Employees]]
e6ebcfc5afd9e20344d161fcfad4289da93b380a
Jetson Flashing Walkthrough
0
175
736
724
2024-04-30T06:04:06Z
Stompy
14
Bot expanded article
wikitext
text/x-wiki
== Overview ==
Flashing a Jetson Orin Nano or AGX is the process of updating the software or firmware installed on these NVIDIA Jetson models. It is a common technique used by developers to either install a new system or restore an existing one. Please follow the steps below to successfully flash your device.
{{infobox company
| name = NVIDIA
| country = United States
| website_link = https://www.nvidia.com/
| robots = Jetson Orin Nano, Jetson AGX Xavier
}}
== Prerequisites ==
Before beginning the flashing process, ensure you have the following:
* NVIDIA's flashing software, such as the NVIDIA SDK Manager
* A compatible host machine (either Windows, Linux, or Mac)
* USB cable to connect the Jetson device to the host machine
* Jetson Orin Nano or AGX device
== Flashing Process ==
The following are the general steps to follow when flashing a Jetson Orin Nano or AGX:
# Download and install the appropriate NVIDIA SDK Manager on your host machine.
# Connect the Jetson device to your host machine using the USB cable.
# Open the NVIDIA SDK Manager and select your Jetson device (either Orin Nano or AGX).
# Follow the on-screen instructions to complete the flashing process.
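As a quick check for step 2 above, you can confirm that the host actually sees the Jetson over USB. The snippet below is only a sanity check on a Linux host: it shells out to lsusb and looks for an NVIDIA entry; the exact vendor/product string depends on the module and on whether it is in recovery mode.
<syntaxhighlight lang="python">
"""Sanity check (Linux host): is a Jetson visible over USB?

This only greps lsusb output for an NVIDIA entry; the exact string depends
on the module and on whether it is in recovery mode, so adapt as needed.
"""
import subprocess

def jetson_visible() -> bool:
    out = subprocess.run(["lsusb"], capture_output=True, text=True, check=True).stdout
    return any("NVIDIA" in line for line in out.splitlines())

if __name__ == "__main__":
    print("NVIDIA device detected over USB" if jetson_visible() else "No NVIDIA USB device found")
</syntaxhighlight>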
Once the flashing process is complete, your Jetson device should now be updated with the latest firmware or software.
== Troubleshooting ==
If the flashing process fails or encounters any issues, refer to NVIDIA's official documentation or contact their support for assistance.
== References ==
<references />
[[Category:Robotics]] [[Category:Instruction]]
42dbd469b9d0697a50c07aaa8dbe653cb086affd
Category:Teleop
14
60
737
427
2024-04-30T12:01:41Z
136.62.52.52
0
wikitext
text/x-wiki
= Teleop =
Teleoperation is the art of controlling a robot from a distance (prefix "tele-" comes from the Ancient Greek word tēle, which means "far off, at a distance, far away, far from").
Robots specifically designed to be teleoperated by a human are known as proxies.
== Whole Body Teleop ==
In whole-body teleop, the user directly manipulates a hardware replica of the robot, and the real robot mimics the pose of the replica.
* https://mobile-aloha.github.io/
* https://www.youtube.com/watch?v=PFw5hwNVhbA
* https://x.com/haoshu_fang/status/1707434624413306955
* https://x.com/aditya_oberai/status/1762637503495033171
* https://what-is-proxy.com
* https://github.com/wuphilipp/gello_software
* https://www.ais.uni-bonn.de/papers/Humanoids_2023_Schwarz_Lenz_NimbRo_Avatar.pdf
* https://robot.neu.edu/project/avatar/
== VR Teleop ==
In VR teleop, the user controls a simulated version of the robot. The robot and the VR headset are usually on the same LAN, and communication is done via gRPC, WebRTC, or ZMQ.
* https://github.com/Improbable-AI/VisionProTeleop
* https://github.com/fazildgr8/VR_communication_mujoco200
* https://github.com/pollen-robotics/reachy2021-unity-package
* https://freetale.medium.com/unity-grpc-in-2023-98b739cb115
* https://x.com/AndreTI/status/1780665435999924343
* https://holo-dex.github.io/
* https://what-is-proxy.com
* https://github.com/ToruOwO/hato/tree/main?tab=readme-ov-file#collecting-demonstration-data
* https://github.com/rail-berkeley/oculus_reader
* https://github.com/OpenTeleVision/TeleVision/tree/main
== Controller Teleop ==
In controller teleop, the user uses joysticks, keyboards, and other controllers to control the robot. The buttons map to hardcoded behaviors on the robot.
* https://github.com/ros-teleop
* https://x.com/ShivinDass/status/1606156894271197184
* https://what-is-proxy.com
== Latency ==
* https://ennerf.github.io/2016/09/20/A-Practical-Look-at-Latency-in-Robotics-The-Importance-of-Metrics-and-Operating-Systems.html
* https://link.springer.com/article/10.1007/s10846-022-01749-3
* https://twitter.com/watneyrobotics/status/1769058250731999591
[[Category: Teleop]]
8a99f62058b0891abdb065970ba6a09facf9c0d6
738
737
2024-04-30T12:03:24Z
136.62.52.52
0
wikitext
text/x-wiki
= Teleop =
Teleoperation is the art of controlling a robot from a distance (prefix "tele-" comes from the Ancient Greek word tēle, which means "far off, at a distance, far away, far from").
Robots specifically designed to be teleoperated by a human are known as proxies.
== Whole Body Teleop ==
In whole-body teleop, the user directly manipulates a hardware replica of the robot, and the real robot mimics the pose of the replica.
* https://mobile-aloha.github.io/
* https://www.youtube.com/watch?v=PFw5hwNVhbA
* https://x.com/haoshu_fang/status/1707434624413306955
* https://x.com/aditya_oberai/status/1762637503495033171
* https://what-is-proxy.com
* https://github.com/wuphilipp/gello_software
* https://www.ais.uni-bonn.de/papers/Humanoids_2023_Schwarz_Lenz_NimbRo_Avatar.pdf
* https://www.ais.uni-bonn.de/papers/SORO_2023_Lenz_Schwarz_NimbRo_Avatar.pdf
* https://robot.neu.edu/project/avatar/
== VR Teleop ==
In VR teleop, the user controls a simulated version of the robot. The robot and the VR headset are usually on the same LAN, and communication is done via gRPC, WebRTC, or ZMQ.
* https://github.com/Improbable-AI/VisionProTeleop
* https://github.com/fazildgr8/VR_communication_mujoco200
* https://github.com/pollen-robotics/reachy2021-unity-package
* https://freetale.medium.com/unity-grpc-in-2023-98b739cb115
* https://x.com/AndreTI/status/1780665435999924343
* https://holo-dex.github.io/
* https://what-is-proxy.com
* https://github.com/ToruOwO/hato/tree/main?tab=readme-ov-file#collecting-demonstration-data
* https://github.com/rail-berkeley/oculus_reader
* https://github.com/OpenTeleVision/TeleVision/tree/main
== Controller Teleop ==
In controller teleop, the user uses joysticks, keyboards, and other controllers to control the robot. The buttons map to hardcoded behaviors on the robot.
* https://github.com/ros-teleop
* https://x.com/ShivinDass/status/1606156894271197184
* https://what-is-proxy.com
== Latency ==
* https://ennerf.github.io/2016/09/20/A-Practical-Look-at-Latency-in-Robotics-The-Importance-of-Metrics-and-Operating-Systems.html
* https://link.springer.com/article/10.1007/s10846-022-01749-3
* https://twitter.com/watneyrobotics/status/1769058250731999591
[[Category: Teleop]]
d1e7ed934a875a9e0657fa03cb506c36d03b111c
739
738
2024-04-30T16:52:28Z
108.211.178.220
0
wikitext
text/x-wiki
= Teleop =
Teleoperation is the art of controlling a robot from a distance (prefix "tele-" comes from the Ancient Greek word tēle, which means "far off, at a distance, far away, far from").
Robots specifically designed to be teleoperated by a human are known as proxies.
== Whole Body Teleop ==
In whole-body teleop, the user directly manipulates a hardware replica of the robot, and the real robot mimics the pose of the replica.
* https://mobile-aloha.github.io/
* https://www.youtube.com/watch?v=PFw5hwNVhbA
* https://x.com/haoshu_fang/status/1707434624413306955
* https://x.com/aditya_oberai/status/1762637503495033171
* https://what-is-proxy.com
* https://github.com/wuphilipp/gello_software
* https://www.ais.uni-bonn.de/papers/Humanoids_2023_Schwarz_Lenz_NimbRo_Avatar.pdf
* https://www.ais.uni-bonn.de/papers/SORO_2023_Lenz_Schwarz_NimbRo_Avatar.pdf
* https://robot.neu.edu/project/avatar/
== VR Teleop ==
In VR teleop, the user controls a simulated version of the robot. The robot and the VR headset are usually on the same LAN, and communication is done via gRPC, WebRTC, or ZMQ.
* https://github.com/Improbable-AI/VisionProTeleop
* https://github.com/fazildgr8/VR_communication_mujoco200
* https://github.com/pollen-robotics/reachy2021-unity-package
* https://freetale.medium.com/unity-grpc-in-2023-98b739cb115
* https://x.com/AndreTI/status/1780665435999924343
* https://holo-dex.github.io/
* https://what-is-proxy.com
* https://github.com/ToruOwO/hato/tree/main?tab=readme-ov-file#collecting-demonstration-data
* https://github.com/rail-berkeley/oculus_reader
* https://github.com/OpenTeleVision/TeleVision/tree/main
== Controller Teleop ==
In controller teleop, the user uses joysticks, keyboards, and other controllers to control the robot. The buttons map to hardcoded behaviors on the robot.
* https://github.com/ros-teleop
* https://x.com/ShivinDass/status/1606156894271197184
* https://what-is-proxy.com
== Latency ==
* https://ennerf.github.io/2016/09/20/A-Practical-Look-at-Latency-in-Robotics-The-Importance-of-Metrics-and-Operating-Systems.html
* https://link.springer.com/article/10.1007/s10846-022-01749-3
* https://twitter.com/watneyrobotics/status/1769058250731999591
[[Category: Teleop]]
ccb19761d033461f7ce2cb5ccf84e5007955bbcc
740
739
2024-04-30T16:54:10Z
108.211.178.220
0
wikitext
text/x-wiki
= Teleop =
Teleoperation is the art of controlling a robot from a distance (prefix "tele-" comes from the Ancient Greek word tēle, which means "far off, at a distance, far away, far from").
Robots specifically designed to be teleoperated by a human are known as proxies.
== Whole Body Teleop ==
In whole-body teleop, the user directly manipulates a hardware replica of the robot, and the real robot mimics the pose of the replica.
* https://mobile-aloha.github.io/
* https://www.youtube.com/watch?v=PFw5hwNVhbA
* https://x.com/haoshu_fang/status/1707434624413306955
* https://x.com/aditya_oberai/status/1762637503495033171
* https://what-is-proxy.com
* https://github.com/wuphilipp/gello_software
* https://www.ais.uni-bonn.de/papers/Humanoids_2023_Schwarz_Lenz_NimbRo_Avatar.pdf
* https://www.ais.uni-bonn.de/papers/SORO_2023_Lenz_Schwarz_NimbRo_Avatar.pdf
* https://robot.neu.edu/project/avatar/
== VR Teleop ==
In VR teleop, the user controls a simulated version of the robot. The robot and the VR headset are usually on the same LAN, and communication is done via gRPC, WebRTC, or ZMQ. A minimal ZMQ sketch follows the links below.
* https://github.com/Improbable-AI/VisionProTeleop
* https://github.com/fazildgr8/VR_communication_mujoco200
* https://github.com/pollen-robotics/reachy2021-unity-package
* https://freetale.medium.com/unity-grpc-in-2023-98b739cb115
* https://x.com/AndreTI/status/1780665435999924343
* https://holo-dex.github.io/
* https://what-is-proxy.com
* https://github.com/ToruOwO/hato/tree/main?tab=readme-ov-file#collecting-demonstration-data
* https://github.com/rail-berkeley/oculus_reader
* https://github.com/OpenTeleVision/TeleVision/tree/main
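As a concrete illustration of the LAN setup described above, here is a minimal ZeroMQ sketch (using the pyzmq package) that streams a headset pose from an operator machine to the robot. The endpoint, topic name, and message fields are invented for this example; the projects listed above define their own schemas and may use gRPC or WebRTC instead.
<syntaxhighlight lang="python">
"""Minimal pose-streaming sketch with ZeroMQ (pyzmq).

Illustrative only: the port, topic, and message fields are invented for
this example; real VR teleop stacks define their own message schemas.
Run publish_pose() on the operator machine and subscribe() on the robot.
"""
import json
import time
import zmq

BIND_ENDPOINT = "tcp://*:5555"                 # operator side binds
CONNECT_ENDPOINT = "tcp://operator.local:5555" # robot side connects (example hostname)

def publish_pose():
    ctx = zmq.Context.instance()
    pub = ctx.socket(zmq.PUB)
    pub.bind(BIND_ENDPOINT)
    while True:
        # In a real system this pose would come from the headset SDK.
        msg = {"t": time.time(), "position": [0.0, 0.0, 1.5], "orientation": [0, 0, 0, 1]}
        pub.send_multipart([b"pose", json.dumps(msg).encode()])
        time.sleep(0.01)  # ~100 Hz

def subscribe():
    ctx = zmq.Context.instance()
    sub = ctx.socket(zmq.SUB)
    sub.connect(CONNECT_ENDPOINT)
    sub.setsockopt(zmq.SUBSCRIBE, b"pose")
    while True:
        _topic, payload = sub.recv_multipart()
        pose = json.loads(payload)
        # Feed the pose into IK / the robot controller here.
        print(pose["position"], pose["orientation"])
</syntaxhighlight>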
== Controller Teleop ==
In controller teleop, the user uses joysticks, keyboards, and other controllers to control the robot. The buttons map to hardcoded behaviors on the robot.
* https://github.com/ros-teleop
* https://x.com/ShivinDass/status/1606156894271197184
* https://what-is-proxy.com
== Latency ==
* https://ennerf.github.io/2016/09/20/A-Practical-Look-at-Latency-in-Robotics-The-Importance-of-Metrics-and-Operating-Systems.html
* https://link.springer.com/article/10.1007/s10846-022-01749-3
* https://twitter.com/watneyrobotics/status/1769058250731999591
26e6a7febfc97297d71cb02510cbf438de80035f
User:Ben
2
141
743
488
2024-04-30T17:47:16Z
Ben
2
Protected "[[User:Ben]]" ([Edit=Allow only administrators] (indefinite) [Move=Allow only administrators] (indefinite))
wikitext
text/x-wiki
[[File:Ben.jpg|right|200px|thumb]]
[https://ben.bolte.cc/ Ben] is the founder and CEO of [[K-Scale Labs]].
{{infobox person
| name = Ben Bolte
| organization = [[K-Scale Labs]]
| title = CEO
| website_link = https://ben.bolte.cc/
}}
[[Category: K-Scale Employees]]
f971d4004fa84319343a24548245ebcf11a88d16
K-Scale Arm
0
180
747
2024-04-30T18:47:31Z
Ben
2
Created page with "Project documentation for the open-source teleoperated arm from [[K-Scale Labs]]. [[Category:K-Scale]]"
wikitext
text/x-wiki
Project documentation for the open-source teleoperated arm from [[K-Scale Labs]].
[[Category:K-Scale]]
73b2044943cc63dd975102d143161221135a9154
K-Scale Onshape Library
0
181
748
2024-04-30T19:07:12Z
Ben
2
Created page with "The '''K-Scale Onshape Library'''<ref>https://github.com/kscalelabs/onshape</ref> is a tool developed by [[K-Scale Labs]] to convert Onshape files into simulation artifacts...."
wikitext
text/x-wiki
The '''K-Scale Onshape Library'''<ref>https://github.com/kscalelabs/onshape</ref> is a tool developed by [[K-Scale Labs]] to convert Onshape files into simulation artifacts.
=== Sample Script ===
Below is a sample script for converting a [[Stompy]] Onshape model to a URDF.
<syntaxhighlight lang="bash">
#!/bin/zsh
# ./sim/scripts/download_urdf.sh
# URL of the latest model.
url=https://cad.onshape.com/documents/71f793a23ab7562fb9dec82d/w/6160a4f44eb6113d3fa116cd/e/1a95e260677a2d2d5a3b1eb3
# Output directory.
output_dir=${MODEL_DIR}/robots/stompy
kol urdf ${url} \
--max-ang-velocity 31.4 \
--suffix-to-joint-effort \
dof_x4_h=1.5 \
dof_x4=1.5 \
dof_x6=3 \
dof_x8=6 \
dof_x10=12 \
knee_revolute=13.9 \
ankle_revolute=6 \
--output-dir ${output_dir} \
--disable-mimics \
--mesh-ext obj
</syntaxhighlight>
61f7782f7c1a403609f78c657bb6c8bee8b1cbc7
749
748
2024-04-30T19:07:28Z
Ben
2
wikitext
text/x-wiki
The '''K-Scale Onshape Library'''<ref>https://github.com/kscalelabs/onshape</ref> is a tool developed by [[K-Scale Labs]] to convert Onshape files into simulation artifacts.
=== Sample Script ===
Below is a sample script for converting a [[Stompy]] Onshape model to a URDF<ref>https://github.com/kscalelabs/sim/blob/master/sim/scripts/download_urdf.sh</ref>.
<syntaxhighlight lang="bash">
#!/bin/zsh
# ./sim/scripts/download_urdf.sh
# URL of the latest model.
url=https://cad.onshape.com/documents/71f793a23ab7562fb9dec82d/w/6160a4f44eb6113d3fa116cd/e/1a95e260677a2d2d5a3b1eb3
# Output directory.
output_dir=${MODEL_DIR}/robots/stompy
kol urdf ${url} \
--max-ang-velocity 31.4 \
--suffix-to-joint-effort \
dof_x4_h=1.5 \
dof_x4=1.5 \
dof_x6=3 \
dof_x8=6 \
dof_x10=12 \
knee_revolute=13.9 \
ankle_revolute=6 \
--output-dir ${output_dir} \
--disable-mimics \
--mesh-ext obj
</syntaxhighlight>
a43b10e0f29da1f3ff922079a7ca806cbffab3df
750
749
2024-04-30T19:07:33Z
Ben
2
wikitext
text/x-wiki
The '''K-Scale Onshape Library'''<ref>https://github.com/kscalelabs/onshape</ref> is a tool developed by [[K-Scale Labs]] to convert Onshape files into simulation artifacts.
=== Sample Script ===
Below is a sample script for converting a [[Stompy]] Onshape model to a URDF<ref>https://github.com/kscalelabs/sim/blob/master/sim/scripts/download_urdf.sh</ref>.
<syntaxhighlight lang="bash">
#!/bin/zsh
# URL of the latest model.
url=https://cad.onshape.com/documents/71f793a23ab7562fb9dec82d/w/6160a4f44eb6113d3fa116cd/e/1a95e260677a2d2d5a3b1eb3
# Output directory.
output_dir=${MODEL_DIR}/robots/stompy
kol urdf ${url} \
--max-ang-velocity 31.4 \
--suffix-to-joint-effort \
dof_x4_h=1.5 \
dof_x4=1.5 \
dof_x6=3 \
dof_x8=6 \
dof_x10=12 \
knee_revolute=13.9 \
ankle_revolute=6 \
--output-dir ${output_dir} \
--disable-mimics \
--mesh-ext obj
</syntaxhighlight>
7b5394eed1a93cbf30bae8ef4553ac6bd66df66f
Controller Area Network (CAN)
0
155
751
686
2024-04-30T22:45:53Z
108.211.178.220
0
wikitext
text/x-wiki
The '''Controller Area Network''' (CAN) represents an essential vehicle bus standard. It was explicitly engineered to facilitate communication between microcontrollers and devices without resorting to a central computer. The primary purpose of CAN is to enhance vehicle interconnectivity and foster swift data exchange between various systems within the vehicle.
=== MCP2515 ===
The '''MCP2515''' is an integrated circuit produced by Microchip Technology, constructed to function as a stand-alone CAN controller. It exhibits compatibility with an SPI (Serial Peripheral Interface) which makes it versatile in various applications. Although its primary use is in automotive industries, it is also used in a variety of other control applications.<ref>MCP2515 Stand-Alone CAN Controller with SPI Interface. Microchip Technology. [https://www.microchip.com/wwwproducts/en/en010406 Official datasheet].</ref>
The MCP2515 bridges the connection between the CAN protocol and the SPI protocol by receiving CAN messages and translating them into SPI data, allowing the microcontroller to interpret the information. Similarly, it transforms SPI data into CAN messages for transmission.<ref>MCP2515 Stand-Alone CAN Controller with SPI Interface. Microchip Technology. [https://www.microchip.com/wwwproducts/en/en010406 Official datasheet].</ref>
The MCP2515's operational details include its capacity to support CAN 2.0A and B, drawing attention to its alignment with established CAN standards and equipping it for both basic and extended frame format usage.<ref>MCP2515 Stand-Alone CAN Controller with SPI Interface. Microchip Technology. [https://www.microchip.com/wwwproducts/en/en010406 Official datasheet].</ref>
==== MCP2515 Driver ====
== References ==
<references />
[[Category:Communication]]
81460e77f1384b55c059a1ce2b494022760b030e
768
751
2024-05-01T01:27:26Z
Budzianowski
19
wikitext
text/x-wiki
The '''Controller Area Network''' (CAN) represents an essential vehicle bus standard. It was explicitly engineered to facilitate communication between microcontrollers and devices without resorting to a central computer. The primary purpose of CAN is to enhance vehicle interconnectivity and foster swift data exchange between various systems within the vehicle.
=== MCP2515 ===
The '''MCP2515''' is an integrated circuit produced by Microchip Technology, constructed to function as a stand-alone CAN controller. It exhibits compatibility with an SPI (Serial Peripheral Interface) which makes it versatile in various applications. Although its primary use is in automotive industries, it is also used in a variety of other control applications.<ref>MCP2515 Stand-Alone CAN Controller with SPI Interface. Microchip Technology. [https://www.microchip.com/wwwproducts/en/en010406 Official datasheet].</ref>
The MCP2515 bridges the connection between the CAN protocol and the SPI protocol by receiving CAN messages and translating them into SPI data, allowing the microcontroller to interpret the information. Similarly, it transforms SPI data into CAN messages for transmission.<ref>MCP2515 Stand-Alone CAN Controller with SPI Interface. Microchip Technology. [https://www.microchip.com/wwwproducts/en/en010406 Official datasheet].</ref>
The MCP2515's operational details include its capacity to support CAN 2.0A and B, drawing attention to its alignment with established CAN standards and equipping it for both basic and extended frame format usage.<ref>MCP2515 Stand-Alone CAN Controller with SPI Interface. Microchip Technology. [https://www.microchip.com/wwwproducts/en/en010406 Official datasheet].</ref>
=== Applications ===
The [https://python-can.readthedocs.io/en/stable/ python-can] library provides Controller Area Network support for Python, with common abstractions over different hardware interfaces and a suite of utilities for sending and receiving messages on a CAN bus.
An example installation is described on the [[Jetson_Orin]] page.
==== Pi4 ====
Waveshare offers an easy-to-deploy 2-Channel Isolated CAN Bus Expansion HAT for the Raspberry Pi, which makes it quick to connect the Pi to peripheral CAN devices. See the [https://www.waveshare.com/wiki/2-CH_CAN_HAT tutorial] for more information.
==== Arduino ====
Arduino has good support for the MCP2515, with many driver implementations such as [https://github.com/Seeed-Studio/Seeed_Arduino_CAN Seeed_Arduino_CAN].
==== MCP2515 Driver ====
By default, every CAN bus node is supposed to acknowledge every message on the bus, whether or not that node is interested in the message. However, interference on the network can drop bits during communication. In the standard mode, the node would not only continuously try to re-send unacknowledged messages, but after a short period it would also start sending error frames and eventually go to bus-off mode and stop. This causes severe issues when the CAN network drives multiple motors.
The controller has a [http://ww1.microchip.com/downloads/en/DeviceDoc/MCP2515-Stand-Alone-CAN-Controller-with-SPI-20001801J.pdf one-shot] mode that requires changes in the driver.
== References ==
<references />
[[Category:Communication]]
f4701a3d65b6ae701b5e1a2f6d155536c2b6abb9
769
768
2024-05-01T01:28:05Z
Budzianowski
19
wikitext
text/x-wiki
The '''Controller Area Network''' (CAN) represents an essential vehicle bus standard. It was explicitly engineered to facilitate communication between microcontrollers and devices without resorting to a central computer. The primary purpose of CAN is to enhance vehicle interconnectivity and foster swift data exchange between various systems within the vehicle.
== MCP2515 ==
The '''MCP2515''' is an integrated circuit produced by Microchip Technology, constructed to function as a stand-alone CAN controller. It exhibits compatibility with an SPI (Serial Peripheral Interface) which makes it versatile in various applications. Although its primary use is in automotive industries, it is also used in a variety of other control applications.<ref>MCP2515 Stand-Alone CAN Controller with SPI Interface. Microchip Technology. [https://www.microchip.com/wwwproducts/en/en010406 Official datasheet].</ref>
The MCP2515 bridges the connection between the CAN protocol and the SPI protocol by receiving CAN messages and translating them into SPI data, allowing the microcontroller to interpret the information. Similarly, it transforms SPI data into CAN messages for transmission.<ref>MCP2515 Stand-Alone CAN Controller with SPI Interface. Microchip Technology. [https://www.microchip.com/wwwproducts/en/en010406 Official datasheet].</ref>
The MCP2515's operational details include its capacity to support CAN 2.0A and B, drawing attention to its alignment with established CAN standards and equipping it for both basic and extended frame format usage.<ref>MCP2515 Stand-Alone CAN Controller with SPI Interface. Microchip Technology. [https://www.microchip.com/wwwproducts/en/en010406 Official datasheet].</ref>
== Applications ==
The [https://python-can.readthedocs.io/en/stable/ python-can] library provides Controller Area Network support for Python, with common abstractions over different hardware interfaces and a suite of utilities for sending and receiving messages on a CAN bus.
An example installation is described on the [[Jetson_Orin]] page.
=== Pi4 ===
Waveshare offers an easy-to-deploy 2-Channel Isolated CAN Bus Expansion HAT for the Raspberry Pi, which makes it quick to connect the Pi to peripheral CAN devices. See the [https://www.waveshare.com/wiki/2-CH_CAN_HAT tutorial] for more information.
=== Arduino ===
Arduino has good support for the MCP2515, with many driver implementations such as [https://github.com/Seeed-Studio/Seeed_Arduino_CAN Seeed_Arduino_CAN].
=== MCP2515 Driver ===
By default, every CAN bus node is supposed to acknowledge every message on the bus, whether or not that node is interested in the message. However, interference on the network can drop bits during communication. In the standard mode, the node would not only continuously try to re-send unacknowledged messages, but after a short period it would also start sending error frames and eventually go to bus-off mode and stop. This causes severe issues when the CAN network drives multiple motors.
The controller has a [http://ww1.microchip.com/downloads/en/DeviceDoc/MCP2515-Stand-Alone-CAN-Controller-with-SPI-20001801J.pdf one-shot] mode that requires changes in the driver.
== References ==
<references />
[[Category:Communication]]
ba36d170b88c7639fdc62ee3e3294b9d7389da4b
770
769
2024-05-01T01:28:28Z
Budzianowski
19
wikitext
text/x-wiki
The '''Controller Area Network''' (CAN) represents an essential vehicle bus standard. It was explicitly engineered to facilitate communication between microcontrollers and devices without resorting to a central computer. The primary purpose of CAN is to enhance vehicle interconnectivity and foster swift data exchange between various systems within the vehicle.
== MCP2515 ==
The '''MCP2515''' is an integrated circuit produced by Microchip Technology, constructed to function as a stand-alone CAN controller. It exhibits compatibility with an SPI (Serial Peripheral Interface) which makes it versatile in various applications. Although its primary use is in automotive industries, it is also used in a variety of other control applications.<ref>MCP2515 Stand-Alone CAN Controller with SPI Interface. Microchip Technology. [https://www.microchip.com/wwwproducts/en/en010406 Official datasheet].</ref>
The MCP2515 bridges the connection between the CAN protocol and the SPI protocol by receiving CAN messages and translating them into SPI data, allowing the microcontroller to interpret the information. Similarly, it transforms SPI data into CAN messages for transmission.<ref>MCP2515 Stand-Alone CAN Controller with SPI Interface. Microchip Technology. [https://www.microchip.com/wwwproducts/en/en010406 Official datasheet].</ref>
The MCP2515's operational details include its capacity to support CAN 2.0A and B, drawing attention to its alignment with established CAN standards and equipping it for both basic and extended frame format usage.<ref>MCP2515 Stand-Alone CAN Controller with SPI Interface. Microchip Technology. [https://www.microchip.com/wwwproducts/en/en010406 Official datasheet].</ref>
== Applications ==
The [https://python-can.readthedocs.io/en/stable/ python-can] library provides Controller Area Network support for Python, with common abstractions over different hardware interfaces and a suite of utilities for sending and receiving messages on a CAN bus.
An example installation is described on the [[Jetson_Orin]] page.
=== Pi4 ===
Waveshare offers an easy-to-deploy 2-Channel Isolated CAN Bus Expansion HAT for the Raspberry Pi, which makes it quick to connect the Pi to peripheral CAN devices. See the [https://www.waveshare.com/wiki/2-CH_CAN_HAT tutorial] for more information.
=== Arduino ===
Arduino has good support for the MCP2515, with many driver implementations such as [https://github.com/Seeed-Studio/Seeed_Arduino_CAN Seeed_Arduino_CAN].
=== MCP2515 Driver ===
By default, every CAN bus node is supposed to acknowledge every message on the bus, whether or not that node is interested in the message. However, interference on the network can drop bits during communication. In the standard mode, the node would not only continuously try to re-send unacknowledged messages, but after a short period it would also start sending error frames and eventually go to bus-off mode and stop. This causes severe issues when the CAN network drives multiple motors.
The controller has a [http://ww1.microchip.com/downloads/en/DeviceDoc/MCP2515-Stand-Alone-CAN-Controller-with-SPI-20001801J.pdf one-shot] mode that requires changes in the driver.
== References ==
<references />
[[Category:Communication]]
b54b57ad3e3f520c864439646564f9579fb1435e
771
770
2024-05-01T01:28:52Z
Budzianowski
19
wikitext
text/x-wiki
The '''Controller Area Network''' (CAN) represents an essential vehicle bus standard. It was explicitly engineered to facilitate communication between microcontrollers and devices without resorting to a central computer. The primary purpose of CAN is to enhance vehicle interconnectivity and foster swift data exchange between various systems within the vehicle.
== MCP2515 ==
The '''MCP2515''' is an integrated circuit produced by Microchip Technology, constructed to function as a stand-alone CAN controller. It exhibits compatibility with an SPI (Serial Peripheral Interface) which makes it versatile in various applications. Although its primary use is in automotive industries, it is also used in a variety of other control applications.<ref>MCP2515 Stand-Alone CAN Controller with SPI Interface. Microchip Technology. [https://www.microchip.com/wwwproducts/en/en010406 Official datasheet].</ref>
The MCP2515 bridges the connection between the CAN protocol and the SPI protocol by receiving CAN messages and translating them into SPI data, allowing the microcontroller to interpret the information. Similarly, it transforms SPI data into CAN messages for transmission.<ref>MCP2515 Stand-Alone CAN Controller with SPI Interface. Microchip Technology. [https://www.microchip.com/wwwproducts/en/en010406 Official datasheet].</ref>
The MCP2515's operational details include its capacity to support CAN 2.0A and B, drawing attention to its alignment with established CAN standards and equipping it for both basic and extended frame format usage.<ref>MCP2515 Stand-Alone CAN Controller with SPI Interface. Microchip Technology. [https://www.microchip.com/wwwproducts/en/en010406 Official datasheet].</ref>
== Applications ==
The [https://python-can.readthedocs.io/en/stable/ python-can] library provides Controller Area Network support for Python, with common abstractions over different hardware interfaces and a suite of utilities for sending and receiving messages on a CAN bus.
An example installation is described on the [[Jetson_Orin]] page.
=== Pi4 ===
Waveshare offers an easy-to-deploy 2-Channel Isolated CAN Bus Expansion HAT for the Raspberry Pi, which makes it quick to connect the Pi to peripheral CAN devices. See the [https://www.waveshare.com/wiki/2-CH_CAN_HAT tutorial] for more information.
=== Arduino ===
Arduino has good support for the MCP2515, with many driver implementations such as [https://github.com/Seeed-Studio/Seeed_Arduino_CAN Seeed_Arduino_CAN].
=== MCP2515 Driver ===
By default, every CAN bus node is supposed to acknowledge every message on the bus, whether or not that node is interested in the message. However, interference on the network can drop bits during communication. In the standard mode, the node would not only continuously try to re-send unacknowledged messages, but after a short period it would also start sending error frames and eventually go to bus-off mode and stop. This causes severe issues when the CAN network drives multiple motors.
The controller has a [http://ww1.microchip.com/downloads/en/DeviceDoc/MCP2515-Stand-Alone-CAN-Controller-with-SPI-20001801J.pdf one-shot] mode that requires changes in the driver.
== References ==
<references />
[[Category:Communication]]
e69e7301f068d5801538a432910f0b3591285686
772
771
2024-05-01T01:29:06Z
Budzianowski
19
wikitext
text/x-wiki
The '''Controller Area Network''' (CAN) represents an essential vehicle bus standard. It was explicitly engineered to facilitate communication between microcontrollers and devices without resorting to a central computer. The primary purpose of CAN is to enhance vehicle interconnectivity and foster swift data exchange between various systems within the vehicle.
== MCP2515 ==
The '''MCP2515''' is an integrated circuit produced by Microchip Technology, constructed to function as a stand-alone CAN controller. It exhibits compatibility with an SPI (Serial Peripheral Interface) which makes it versatile in various applications. Although its primary use is in automotive industries, it is also used in a variety of other control applications.<ref>MCP2515 Stand-Alone CAN Controller with SPI Interface. Microchip Technology. [https://www.microchip.com/wwwproducts/en/en010406 Official datasheet].</ref>
The MCP2515 bridges the connection between the CAN protocol and the SPI protocol by receiving CAN messages and translating them into SPI data, allowing the microcontroller to interpret the information. Similarly, it transforms SPI data into CAN messages for transmission.<ref>MCP2515 Stand-Alone CAN Controller with SPI Interface. Microchip Technology. [https://www.microchip.com/wwwproducts/en/en010406 Official datasheet].</ref>
The MCP2515's operational details include its capacity to support CAN 2.0A and B, drawing attention to its alignment with established CAN standards and equipping it for both basic and extended frame format usage.<ref>MCP2515 Stand-Alone CAN Controller with SPI Interface. Microchip Technology. [https://www.microchip.com/wwwproducts/en/en010406 Official datasheet].</ref>
== Applications ==
The [https://python-can.readthedocs.io/en/stable/ python-can] library provides Controller Area Network support for Python, with common abstractions over different hardware interfaces and a suite of utilities for sending and receiving messages on a CAN bus.
An example installation is described on the [[Jetson_Orin]] page; a minimal usage sketch is shown below.
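The sketch below sends one frame and prints whatever arrives on a SocketCAN interface. It assumes a Linux system where the can0 interface already exists and is up (for example via a CAN HAT or USB-CAN adapter); the arbitration ID and payload are arbitrary example values.
<syntaxhighlight lang="python">
"""Minimal python-can sketch over SocketCAN.

Assumes a Linux host where can0 is already configured and up; the
arbitration ID and payload are arbitrary example values.
"""
import can

# On older python-can versions use bustype="socketcan" instead of interface=.
with can.interface.Bus(channel="can0", interface="socketcan") as bus:
    # Send one standard (11-bit ID) frame.
    bus.send(can.Message(arbitration_id=0x123, data=[0x01, 0x02, 0x03], is_extended_id=False))

    # Print up to 100 incoming frames, waiting at most 0.1 s for each.
    for _ in range(100):
        msg = bus.recv(timeout=0.1)
        if msg is not None:
            print(msg)
</syntaxhighlight>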
=== Pi4 ===
Waveshare offers an easy-to-deploy 2-Channel Isolated CAN Bus Expansion HAT for the Raspberry Pi, which makes it quick to connect the Pi to peripheral CAN devices. See the [https://www.waveshare.com/wiki/2-CH_CAN_HAT tutorial] for more information; a small bring-up helper is sketched below.
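For reference, here is a small helper that brings the SocketCAN interface up from Python. It simply wraps the standard iproute2 commands and needs root privileges; the interface name and bitrate are examples and must match how the HAT's MCP2515 overlay is configured (see the Waveshare tutorial above).
<syntaxhighlight lang="python">
"""Bring up a SocketCAN interface from Python (Linux, requires root).

Wraps the standard iproute2 commands; the interface name and bitrate are
examples and must match the MCP2515 overlay configured for the HAT.
"""
import subprocess

def bring_up(channel: str = "can0", bitrate: int = 500000) -> None:
    # Take the interface down first in case it is already configured.
    subprocess.run(["ip", "link", "set", channel, "down"], check=False)
    subprocess.run(
        ["ip", "link", "set", channel, "up", "type", "can", "bitrate", str(bitrate)],
        check=True,
    )

if __name__ == "__main__":
    bring_up()
</syntaxhighlight>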
=== Arduino ===
Arduino has good support for the MCP2515, with many driver implementations such as [https://github.com/Seeed-Studio/Seeed_Arduino_CAN Seeed_Arduino_CAN].
=== MCP2515 Driver ===
By default, every CAN bus node is supposed to acknowledge every message on the bus, whether or not that node is interested in the message. However, interference on the network can drop bits during communication. In the standard mode, the node would not only continuously try to re-send unacknowledged messages, but after a short period it would also start sending error frames and eventually go to bus-off mode and stop. This causes severe issues when the CAN network drives multiple motors.
The controller has a [http://ww1.microchip.com/downloads/en/DeviceDoc/MCP2515-Stand-Alone-CAN-Controller-with-SPI-20001801J.pdf one-shot] mode that requires changes in the driver.
== References ==
<references />
[[Category:Communication]]
b54b57ad3e3f520c864439646564f9579fb1435e
Walking Stompy guide
0
182
752
2024-05-01T00:06:01Z
Budzianowski
19
Created page with "A guide to simulate Stompy walking. [[Category: Guides]] [[Category: Software]]"
wikitext
text/x-wiki
A guide to simulating Stompy walking.
[[Category: Guides]]
[[Category: Software]]
71b1225810ae841c4e9ed317963cbc926cec27bf
755
752
2024-05-01T00:06:43Z
Budzianowski
19
Budzianowski moved page [[Simulation guide]] to [[Walking Stompy guide]]
wikitext
text/x-wiki
A guide to simulating Stompy walking.
[[Category: Guides]]
[[Category: Software]]
71b1225810ae841c4e9ed317963cbc926cec27bf
Template:Infobox paper
10
183
753
2024-05-01T00:06:28Z
Ben
2
Created page with "{{infobox | name = {{{name}}} | key1 = Name | value1 = {{{name}}} | key2 = Full Name | value2 = {{{full_name}}} | key3 = Arxiv | value3 = {{#if: {{{arxiv_link|}}} | [{{{arxiv_..."
wikitext
text/x-wiki
{{infobox
| name = {{{name}}}
| key1 = Name
| value1 = {{{name}}}
| key2 = Full Name
| value2 = {{{full_name}}}
| key3 = Arxiv
| value3 = {{#if: {{{arxiv_link|}}} | [{{{arxiv_link}}} Link] }}
| key4 = Project Page
| value4 = {{#if: {{{project_link|}}} | [{{{project_link}}} Website] }}
| key5 = Twitter
| value5 = {{#if: {{{twitter_link|}}} | [{{{twitter_link}}} Twitter] }}
| key6 = Publication Date
| value6 = {{{date|}}}
| key7 = Authors
| value7 = {{{authors|}}}
}}
d535150cf5288a390695971df41697349e1774bc
754
753
2024-05-01T00:06:32Z
Ben
2
wikitext
text/x-wiki
{{infobox
| name = {{{name}}}
| key1 = Name
| value1 = {{{name}}}
| key2 = Full Name
| value2 = {{{full_name|}}}
| key3 = Arxiv
| value3 = {{#if: {{{arxiv_link|}}} | [{{{arxiv_link}}} Link] }}
| key4 = Project Page
| value4 = {{#if: {{{project_link|}}} | [{{{project_link}}} Website] }}
| key5 = Twitter
| value5 = {{#if: {{{twitter_link|}}} | [{{{twitter_link}}} Twitter] }}
| key6 = Publication Date
| value6 = {{{date|}}}
| key7 = Authors
| value7 = {{{authors|}}}
}}
61ecf4fcda3f2f391640a2d7ca47a3ac0742fec8
Simulation guide
0
184
756
2024-05-01T00:06:43Z
Budzianowski
19
Budzianowski moved page [[Simulation guide]] to [[Walking Stompy guide]]
wikitext
text/x-wiki
#REDIRECT [[Walking Stompy guide]]
843539494d81f842a9d35875aa1afdf1c8aa05e8
Applications
0
66
757
289
2024-05-01T00:07:39Z
Budzianowski
19
wikitext
text/x-wiki
=Applications List=
A non-comprehensive list of training frameworks is given below.
===[https://github.com/leggedrobotics/legged_gym Legged Gym]===
Isaac Gym Environments for Legged Robots.
===[https://github.com/chengxuxin/extreme-parkour Extreme Parkour]===
Extreme Parkour with AMP Legged Robots.
===[https://github.com/Alescontrela/AMP_for_hardware AMP for hardware]===
Adversarial Motion Priors Make Good Substitutes for Complex Reward Functions
===[https://github.com/ZhengyiLuo/PHC PHC]===
Official Implementation of the ICCV 2023 paper: Perpetual Humanoid Control for Real-time Simulated Avatars.
===[https://github.com/roboterax/humanoid-gym Humanoid Gym]===
Training setup for walking with [[Xbot-L]].
===[https://github.com/kscalelabs/sim KScale Sim]===
Training setup for getting up and walking with [[Stompy]].
[[Category: Software]]
2a756c2265c456337d18f09f69c22c02f74ed71b
Isaac Sim
0
18
758
342
2024-05-01T00:08:04Z
Budzianowski
19
wikitext
text/x-wiki
Isaac Sim is a simulator from NVIDIA connected to the Omniverse platform. The core physics engine underlying Isaac Sim is PhysX.
=== Doing Simple Operations ===
'''Start Isaac Sim'''
* Open Omniverse Launcher
* Navigate to the Library
* Under “Apps” click “Isaac Sim”
* Click “Launch”
** There are multiple options for launching. Choose the normal one to show the GUI or headless if streaming.
* Choose <code>File > Open...</code> and select the <code>.usd</code> model corresponding to the robot you want to simulate.
'''Connecting streaming client'''
* Start Isaac Sim in Headless (Native) mode
* Open Omniverse Streaming Client
* Connect to the server
[[Category: Software]]
[[Category: Simulators]]
ee307ceaaa4f5a2d87fa7daecc75b7a1a453db5c
Reinforcement Learning
0
34
759
628
2024-05-01T00:08:25Z
Budzianowski
19
wikitext
text/x-wiki
== Training algorithms ==
* [https://en.wikipedia.org/wiki/Advantage_Actor_Critic A2C]
* [https://en.wikipedia.org/wiki/Proximal_policy_optimization PPO]
* [https://spinningup.openai.com/en/latest/algorithms/sac.html SAC]
== Resources ==
* [https://mandi-zhao.gitbook.io/deeprl-notes Mandy Zhao's Reinforcement Learning Notes]
[[Category: Software]]
81ff653e50db6b0b9d2db8edbb11a9b7c94941b0
Universal Manipulation Interface
0
185
760
2024-05-01T00:09:18Z
Ben
2
Created page with "'''Universal Manipulation Interface''' is a data collection and policy learning framework that allows direct skill transfer from in-the-wild human demonstrations to deployable..."
wikitext
text/x-wiki
'''Universal Manipulation Interface''' is a data collection and policy learning framework that allows direct skill transfer from in-the-wild human demonstrations to deployable robot policies.
{{infobox
| name = UMI
| full_name = Universal Manipulation Interface: In-The-Wild Robot Teaching Without In-The-Wild Robots
| arxiv_link = https://arxiv.org/abs/2402.10329
| project_page = https://umi-gripper.github.io/
| twitter_link = https://twitter.com/chichengcc/status/1758539728444629158
| date = February 2024
| authors = Cheng Chi, Zhenjia Xu, Chuer Pan, Eric Cousineau, Benjamin Burchfiel, Siyuan Feng, Russ Tedrake, Shuran Song
}}
d4365aba0d074505f2eca573af5dbe87b4b4bd9a
761
760
2024-05-01T00:09:25Z
Ben
2
wikitext
text/x-wiki
'''Universal Manipulation Interface''' is a data collection and policy learning framework that allows direct skill transfer from in-the-wild human demonstrations to deployable robot policies.
{{infobox paper
| name = UMI
| full_name = Universal Manipulation Interface: In-The-Wild Robot Teaching Without In-The-Wild Robots
| arxiv_link = https://arxiv.org/abs/2402.10329
| project_page = https://umi-gripper.github.io/
| twitter_link = https://twitter.com/chichengcc/status/1758539728444629158
| date = February 2024
| authors = Cheng Chi, Zhenjia Xu, Chuer Pan, Eric Cousineau, Benjamin Burchfiel, Siyuan Feng, Russ Tedrake, Shuran Song
}}
511f4975469f1a0914e14a143bff5201b5913bc1
762
761
2024-05-01T00:17:18Z
Ben
2
wikitext
text/x-wiki
'''Universal Manipulation Interface''' is a data collection and policy learning framework that allows direct skill transfer from in-the-wild human demonstrations to deployable robot policies.
{{infobox paper
| name = UMI
| full_name = Universal Manipulation Interface: In-The-Wild Robot Teaching Without In-The-Wild Robots
| arxiv_link = https://arxiv.org/abs/2402.10329
| project_page = https://umi-gripper.github.io/
| twitter_link = https://twitter.com/chichengcc/status/1758539728444629158
| date = February 2024
| authors = Cheng Chi, Zhenjia Xu, Chuer Pan, Eric Cousineau, Benjamin Burchfiel, Siyuan Feng, Russ Tedrake, Shuran Song
}}
The UMI paper was novel for several reasons:
# It completely avoids robot teleoperation and the associated latency. This lets the robot do things like reliably tossing balls.
# It provides a low-cost, scalable way to collect lots of data in the wild
fbf7abcba7902a7a29661d110d9e323e9465e125
763
762
2024-05-01T00:18:03Z
Ben
2
wikitext
text/x-wiki
'''Universal Manipulation Interface''' is a data collection and policy learning framework that allows direct skill transfer from in-the-wild human demonstrations to deployable robot policies.
{{infobox paper
| name = UMI
| full_name = Universal Manipulation Interface: In-The-Wild Robot Teaching Without In-The-Wild Robots
| arxiv_link = https://arxiv.org/abs/2402.10329
| project_page = https://umi-gripper.github.io/
| twitter_link = https://twitter.com/chichengcc/status/1758539728444629158
| date = February 2024
| authors = Cheng Chi, Zhenjia Xu, Chuer Pan, Eric Cousineau, Benjamin Burchfiel, Siyuan Feng, Russ Tedrake, Shuran Song
}}
The UMI paper was novel for several reasons:
# It completely avoids robot teleoperation and the associated latency. This lets the robot do things like reliably tossing balls.
# It provides a low-cost, scalable way to collect lots of data in the wild
[[Category: Papers]]
695e305593ccc6074b379f3f3dde624d0c47186e
Category:Papers
14
186
764
2024-05-01T00:18:25Z
Ben
2
Created page with "This category is used for academic papers related to humanoid robots."
wikitext
text/x-wiki
This category is used for academic papers related to humanoid robots.
c97260db0a996c1061150360992d70590f5191b5
Stompy PCB Designs
0
187
765
2024-05-01T00:55:23Z
Ben
2
Created page with "This document describes the PCBs that we use in Stompy == Head == * Audio * Video * Ethernet (to communicate with the body and for debugging) == Body == * CAN * Power regu..."
wikitext
text/x-wiki
This document describes the PCBs that we use in Stompy
== Head ==
* Audio
* Video
* Ethernet (to communicate with the body and for debugging)
== Body ==
* CAN
* Power regulation
* IMU
** [https://ozzmaker.com/product/berrygps-imu/ BerryGPS-IMU V4]
* Ethernet (to communicate with the head and for debugging)
4d1786526ad8b41f4187c652e6b19b9cb73f6a70
766
765
2024-05-01T01:14:20Z
Ben
2
wikitext
text/x-wiki
This document describes the PCBs that we use in Stompy
== Head ==
* Audio
* Video
* Ethernet (to communicate with the body and for debugging)
== Body ==
* CAN
** MCP2515
**
* Power regulation
* IMU
** [https://ozzmaker.com/product/berrygps-imu/ BerryGPS-IMU V4]
* Ethernet (to communicate with the head and for debugging)
[[Category: Electronics]]
31058132f05946803c66c65995971ef6de835970
767
766
2024-05-01T01:17:30Z
Ben
2
wikitext
text/x-wiki
This document describes the PCBs that we use in Stompy
== Head ==
* Audio
* Video
* Ethernet (to communicate with the body and for debugging)
== Body ==
* CAN
** MCP2515: CAN Controller, SPI compatible
** ATA6561: CAN Transceiver
* Power regulation
* IMU
** [https://ozzmaker.com/product/berrygps-imu/ BerryGPS-IMU V4]
* Ethernet (to communicate with the head and for debugging)
[[Category: Electronics]]
8b877c02270d388a507df0e516a9bab3fe0bd453
774
767
2024-05-01T01:36:44Z
Ben
2
wikitext
text/x-wiki
This document describes the PCBs that we use in Stompy
== Head ==
* Audio
* Video
* Ethernet (to communicate with the body and for debugging)
== Body ==
* CAN
** MCP2515: CAN Controller, SPI compatible
** ATA6561: CAN Transceiver
* Power regulation
* IMU
** [https://ozzmaker.com/product/berrygps-imu/ BerryGPS-IMU V4]
* Ethernet (to communicate with the head and for debugging)
[[File:Raspberry Pi CAN hat.jpg|thumb|left]]
[[Category: Electronics]]
bbf1bcc7fd0caf1bcb19f2da5160367b91533679
775
774
2024-05-01T01:36:57Z
Ben
2
wikitext
text/x-wiki
This document describes the PCBs that we use in Stompy
== Head ==
* Audio
* Video
* Ethernet (to communicate with the body and for debugging)
== Body ==
* CAN
** MCP2515: CAN Controller, SPI compatible
** ATA6561: CAN Transceiver
* Power regulation
* IMU
** [https://ozzmaker.com/product/berrygps-imu/ BerryGPS-IMU V4]
* Ethernet (to communicate with the head and for debugging)
[[File:Raspberry Pi CAN hat.jpg|thumb|left|Raspberry Pi CAN Hat]]
[[Category: Electronics]]
ed4b6b6df1a83b2dbace156e1b9f1d404f888399
776
775
2024-05-01T01:37:51Z
Ben
2
wikitext
text/x-wiki
This document describes the PCBs that we use in Stompy
== Head ==
* Audio
* Video
* Ethernet (to communicate with the body and for debugging)
== Body ==
* CAN
** MCP2515: CAN Controller, SPI compatible
** ATA6561: CAN Transceiver
* Power regulation
** AMS 1117
* IMU
** [https://ozzmaker.com/product/berrygps-imu/ BerryGPS-IMU V4]
* Ethernet (to communicate with the head and for debugging)
[[File:Raspberry Pi CAN hat.jpg|thumb|left|Raspberry Pi CAN Hat]]
[[Category: Electronics]]
231263330d539892c730ec630ff857f81e237854
777
776
2024-05-01T01:41:57Z
Ben
2
wikitext
text/x-wiki
This document describes the PCBs that we use in Stompy
== Head ==
* Audio
* Video
* Ethernet (to communicate with the body and for debugging)
== Body ==
* CAN
** MCP2515: CAN Controller, SPI compatible
** ATA6561: CAN Transceiver
* Power regulation
** AMS 1117
* IMU
** [https://ozzmaker.com/product/berrygps-imu/ BerryGPS-IMU V4]
*** Gyroscope and Accelerometer: LSM6DSL, compatible with SPI and I2C
*** Magnetometer: LIS3MDL, compatible with SPI and I2C
* Ethernet (to communicate with the head and for debugging)
[[File:Raspberry Pi CAN hat.jpg|thumb|left|Raspberry Pi CAN Hat]]
[[Category: Electronics]]
88b47a3f97de190a988bd72bf16ede9ece7bcfd5
778
777
2024-05-01T01:44:38Z
Ben
2
wikitext
text/x-wiki
This document describes the PCBs that we use in Stompy
== Head ==
* Audio
* Video
* Ethernet (to communicate with the body and for debugging)
== Body ==
* CAN
** MCP2515: CAN Controller, SPI compatible
** ATA6561: CAN Transceiver
** MCP2551: CAN Transceiver
* Power regulation
** AMS 1117
* IMU
** [https://ozzmaker.com/product/berrygps-imu/ BerryGPS-IMU V4]
*** Gyroscope and Accelerometer: LSM6DSL, compatible with SPI and I2C
*** Magnetometer: LIS3MDL, compatible with SPI and I2C
* Ethernet (to communicate with the head and for debugging)
[[File:Raspberry Pi CAN hat.jpg|thumb|left|Raspberry Pi CAN Hat]]
[[Category: Electronics]]
0bf764d8b7c6cf1d4c844f0b6926a63ce7f4c587
779
778
2024-05-01T01:45:24Z
Ben
2
wikitext
text/x-wiki
This document describes the PCBs that we use in Stompy
== Head ==
* Audio
* Video
* Ethernet (to communicate with the body and for debugging)
== Body ==
* CAN
** MCP2515: CAN Controller, SPI compatible
** ATA6561: CAN Transceiver
** MCP2551: CAN Transceiver<ref>https://www.seeedstudio.com/I2C-CAN-Bus-Module-p-5054.html</ref>
* Power regulation
** AMS 1117
* IMU
** [https://ozzmaker.com/product/berrygps-imu/ BerryGPS-IMU V4]
*** Gyroscope and Accelerometer: LSM6DSL, compatible with SPI and I2C
*** Magnetometer: LIS3MDL, compatible with SPI and I2C
* Ethernet (to communicate with the head and for debugging)
[[File:Raspberry Pi CAN hat.jpg|thumb|left|Raspberry Pi CAN Hat]]
=== References ===
<references/>
[[Category: Electronics]]
bba0a559b20667c1f09e74a351b040040485efa8
780
779
2024-05-01T01:45:33Z
Ben
2
wikitext
text/x-wiki
This document describes the PCBs that we use in Stompy
[[File:Raspberry Pi CAN hat.jpg|thumb|left|Raspberry Pi CAN Hat]]
== Head ==
* Audio
* Video
* Ethernet (to communicate with the body and for debugging)
== Body ==
* CAN
** MCP2515: CAN Controller, SPI compatible
** ATA6561: CAN Transceiver
** MCP2551: CAN Transceiver<ref>https://www.seeedstudio.com/I2C-CAN-Bus-Module-p-5054.html</ref>
* Power regulation
** AMS 1117
* IMU
** [https://ozzmaker.com/product/berrygps-imu/ BerryGPS-IMU V4]
*** Gyroscope and Accelerometer: LSM6DSL, compatible with SPI and I2C
*** Magnetometer: LIS3MDL, compatible with SPI and I2C
* Ethernet (to communicate with the head and for debugging)
=== References ===
<references/>
[[Category: Electronics]]
85275db397ca95a8bf9ce2ac49b10e3dd920fee9
781
780
2024-05-01T01:46:15Z
Ben
2
wikitext
text/x-wiki
This document describes the PCBs that we use in Stompy
[[File:Raspberry Pi CAN hat.jpg|thumb|right|Raspberry Pi CAN Hat]]
== Head ==
* Audio
* Video
* Ethernet (to communicate with the body and for debugging)
== Body ==
* CAN
** MCP2515: CAN Controller, SPI compatible
** ATA6561: CAN Transceiver
** MCP2551: CAN Transceiver<ref>https://www.seeedstudio.com/I2C-CAN-Bus-Module-p-5054.html</ref>
* Power regulation
** AMS 1117
* IMU
** [https://ozzmaker.com/product/berrygps-imu/ BerryGPS-IMU V4]
*** Gyroscope and Accelerometer: LSM6DSL, compatible with SPI and I2C
*** Magnetometer: LIS3MDL, compatible with SPI and I2C
* Ethernet (to communicate with the head and for debugging)
=== References ===
<references/>
[[Category: Electronics]]
a70a4558f2f9fcae69a5ed63d6f62b270604c707
File:Raspberry Pi CAN hat.jpg
6
188
773
2024-05-01T01:36:34Z
Ben
2
wikitext
text/x-wiki
Raspberry Pi CAN hat
363b61b40b643be7cac4b1095a12ae40d85843fa
Stompy PCB Designs
0
187
782
781
2024-05-01T01:46:23Z
Ben
2
wikitext
text/x-wiki
This document describes the PCBs that we use in Stompy
== Head ==
* Audio
* Video
* Ethernet (to communicate with the body and for debugging)
== Body ==
* CAN
** MCP2515: CAN Controller, SPI compatible
** ATA6561: CAN Transceiver
** MCP2551: CAN Transceiver<ref>https://www.seeedstudio.com/I2C-CAN-Bus-Module-p-5054.html</ref>
* Power regulation
** AMS 1117
* IMU
** [https://ozzmaker.com/product/berrygps-imu/ BerryGPS-IMU V4]
*** Gyroscope and Accelerometer: LSM6DSL, compatible with SPI and I2C
*** Magnetometer: LIS3MDL, compatible with SPI and I2C
* Ethernet (to communicate with the head and for debugging)
[[File:Raspberry Pi CAN hat.jpg|thumb|right|Raspberry Pi CAN Hat]]
=== References ===
<references/>
[[Category: Electronics]]
7030c17752a251387c4cc6bef0d98e2ae180e5f5
783
782
2024-05-01T01:46:29Z
Ben
2
wikitext
text/x-wiki
This document describes the PCBs that we use in Stompy
== Head ==
* Audio
* Video
* Ethernet (to communicate with the body and for debugging)
== Body ==
[[File:Raspberry Pi CAN hat.jpg|thumb|right|Raspberry Pi CAN Hat]]
* CAN
** MCP2515: CAN Controller, SPI compatible
** ATA6561: CAN Transceiver
** MCP2551: CAN Transceiver<ref>https://www.seeedstudio.com/I2C-CAN-Bus-Module-p-5054.html</ref>
* Power regulation
** AMS 1117
* IMU
** [https://ozzmaker.com/product/berrygps-imu/ BerryGPS-IMU V4]
*** Gyroscope and Accelerometer: LSM6DSL, compatible with SPI and I2C
*** Magnetometer: LIS3MDL, compatible with SPI and I2C
* Ethernet (to communicate with the head and for debugging)
=== References ===
<references/>
[[Category: Electronics]]
3cf932315c7ea228c9660a837cf1999a213fa1b9
784
783
2024-05-01T03:48:20Z
Matt
16
Add CAN Transceivers w links
wikitext
text/x-wiki
This document describes the PCBs that we use in Stompy
== Head ==
* Audio
* Video
* Ethernet (to communicate with the body and for debugging)
== Body ==
[[File:Raspberry Pi CAN hat.jpg|thumb|right|Raspberry Pi CAN Hat]]
* CAN
** MCP2515: CAN Controller, SPI compatible
** ATA6561: CAN Transceiver
** MCP2551: CAN Transceiver<ref>https://www.seeedstudio.com/I2C-CAN-Bus-Module-p-5054.html</ref>
** TLE6250G: CAN Transceiver (4.5V~5.5V)<ref>https://www.lcsc.com/product-detail/CAN_Infineon_TLE6250G_TLE6250G_C111030.html </ref>
** TJA1051: CAN Transceiver (4.5V~5.5V)<ref>https://www.lcsc.com/product-detail/CAN-ICs_NXP-Semicon-TJA1051TK-3-118_C124020.html </ref>
** SN65HVD230: CAN transceiver (3V~3.6V)<ref>https://www.lcsc.com/product-detail/CAN-ICs_Texas-Instruments-SN65HVD230QDR_C16468.html</ref>
* Power regulation
** AMS 1117
* IMU
** [https://ozzmaker.com/product/berrygps-imu/ BerryGPS-IMU V4]
*** Gyroscope and Accelerometer: LSM6DSL, compatible with SPI and I2C
*** Magnetometer: LIS3MDL, compatible with SPI and I2C
* Ethernet (to communicate with the head and for debugging)
=== References ===
<references/>
[[Category: Electronics]]
de8661e49e7dd77f8f42f05b888835c0e6f64eeb
785
784
2024-05-01T03:51:45Z
Matt
16
Add voltages and links
wikitext
text/x-wiki
This document describes the PCBs that we use in Stompy
== Head ==
* Audio
* Video
* Ethernet (to communicate with the body and for debugging)
== Body ==
[[File:Raspberry Pi CAN hat.jpg|thumb|right|Raspberry Pi CAN Hat]]
* CAN
** MCP2515: CAN Controller, SPI compatible
** ATA6561: CAN Transceiver (4.5V~5.5V)<ref> https://www.lcsc.com/product-detail/CAN-ICs_Microchip-Tech-ATA6561-GAQW-N_C616016.html</ref>
** MCP2551: CAN Transceiver (4.5V~5.5V)<ref>https://www.seeedstudio.com/I2C-CAN-Bus-Module-p-5054.html</ref><ref>https://www.lcsc.com/product-detail/CAN-ICs_Microchip-Tech-MCP2551-I-SN_C7376.html</ref>
** TLE6250G: CAN Transceiver (4.5V~5.5V)<ref>https://www.lcsc.com/product-detail/CAN_Infineon_TLE6250G_TLE6250G_C111030.html </ref>
** TJA1051: CAN Transceiver (4.5V~5.5V)<ref>https://www.lcsc.com/product-detail/CAN-ICs_NXP-Semicon-TJA1051TK-3-118_C124020.html </ref>
** SN65HVD230: CAN transceiver (3V~3.6V)<ref>https://www.lcsc.com/product-detail/CAN-ICs_Texas-Instruments-SN65HVD230QDR_C16468.html</ref>
* Power regulation
** AMS 1117
* IMU
** [https://ozzmaker.com/product/berrygps-imu/ BerryGPS-IMU V4]
*** Gyroscope and Accelerometer: LSM6DSL, compatible with SPI and I2C
*** Magnetometer: LIS3MDL, compatible with SPI and I2C
* Ethernet (to communicate with the head and for debugging)
=== References ===
<references/>
[[Category: Electronics]]
64c98648b814178a8cac86b711412d3f05535a94
786
785
2024-05-01T03:57:44Z
Matt
16
Add MCP2515 link and voltage ranges
wikitext
text/x-wiki
This document describes the PCBs that we use in Stompy
== Head ==
* Audio
* Video
* Ethernet (to communicate with the body and for debugging)
== Body ==
[[File:Raspberry Pi CAN hat.jpg|thumb|right|Raspberry Pi CAN Hat]]
* CAN
** MCP2515: CAN Controller, SPI compatible (2.7V~5.5V) <ref>https://www.lcsc.com/product-detail/CAN_MICROCHIP_MCP2515-I-SO_MCP2515-I-SO_C12368.html</ref>
** ATA6561: CAN Transceiver (4.5V~5.5V)<ref> https://www.lcsc.com/product-detail/CAN-ICs_Microchip-Tech-ATA6561-GAQW-N_C616016.html</ref>
** MCP2551: CAN Transceiver (4.5V~5.5V)<ref>https://www.seeedstudio.com/I2C-CAN-Bus-Module-p-5054.html</ref><ref>https://www.lcsc.com/product-detail/CAN-ICs_Microchip-Tech-MCP2551-I-SN_C7376.html</ref>
** TLE6250G: CAN Transceiver (4.5V~5.5V)<ref>https://www.lcsc.com/product-detail/CAN_Infineon_TLE6250G_TLE6250G_C111030.html </ref>
** TJA1051: CAN Transceiver (4.5V~5.5V)<ref>https://www.lcsc.com/product-detail/CAN-ICs_NXP-Semicon-TJA1051TK-3-118_C124020.html </ref>
** SN65HVD230: CAN transceiver (3V~3.6V)<ref>https://www.lcsc.com/product-detail/CAN-ICs_Texas-Instruments-SN65HVD230QDR_C16468.html</ref>
* Power regulation
** AMS 1117
* IMU
** [https://ozzmaker.com/product/berrygps-imu/ BerryGPS-IMU V4]
*** Gyroscope and Accelerometer: LSM6DSL, compatible with SPI and I2C
*** Magnetometer: LIS3MDL, compatible with SPI and I2C
* Ethernet (to communicate with the head and for debugging)
=== References ===
<references/>
[[Category: Electronics]]
bbd8058a35f1acb349286fef36763fd8d9378043
787
786
2024-05-01T03:59:06Z
Matt
16
Add compute module sections
wikitext
text/x-wiki
This document describes the PCBs that we use in Stompy
== Head ==
* Compute Module
* Audio
* Video
* Ethernet (to communicate with the body and for debugging)
== Body ==
[[File:Raspberry Pi CAN hat.jpg|thumb|right|Raspberry Pi CAN Hat]]
* Compute Module
* CAN
** MCP2515: CAN Controller, SPI compatible (2.7V~5.5V) <ref>https://www.lcsc.com/product-detail/CAN_MICROCHIP_MCP2515-I-SO_MCP2515-I-SO_C12368.html</ref>
** ATA6561: CAN Transceiver (4.5V~5.5V)<ref> https://www.lcsc.com/product-detail/CAN-ICs_Microchip-Tech-ATA6561-GAQW-N_C616016.html</ref>
** MCP2551: CAN Transceiver (4.5V~5.5V)<ref>https://www.seeedstudio.com/I2C-CAN-Bus-Module-p-5054.html</ref><ref>https://www.lcsc.com/product-detail/CAN-ICs_Microchip-Tech-MCP2551-I-SN_C7376.html</ref>
** TLE6250G: CAN Transceiver (4.5V~5.5V)<ref>https://www.lcsc.com/product-detail/CAN_Infineon_TLE6250G_TLE6250G_C111030.html </ref>
** TJA1051: CAN Transceiver (4.5V~5.5V)<ref>https://www.lcsc.com/product-detail/CAN-ICs_NXP-Semicon-TJA1051TK-3-118_C124020.html </ref>
** SN65HVD230: CAN transceiver (3V~3.6V)<ref>https://www.lcsc.com/product-detail/CAN-ICs_Texas-Instruments-SN65HVD230QDR_C16468.html</ref>
* Power regulation
** AMS 1117
* IMU
** [https://ozzmaker.com/product/berrygps-imu/ BerryGPS-IMU V4]
*** Gyroscope and Accelerometer: LSM6DSL, compatible with SPI and I2C
*** Magnetometer: LIS3MDL, compatible with SPI and I2C
* Ethernet (to communicate with the head and for debugging)
=== References ===
<references/>
[[Category: Electronics]]
c49ade2b822aa3cf5a621d3eccd5311dd3d0cf67
788
787
2024-05-01T04:26:03Z
Ben
2
wikitext
text/x-wiki
This document describes the PCBs that we use in Stompy
== Head ==
* Audio
* Video
* Multiplexer
* GMSL transmission
** MAX96701 serializer
** MAX96700 deserializer
== Body ==
[[File:Raspberry Pi CAN hat.jpg|thumb|right|Raspberry Pi CAN Hat]]
* Compute Module
* CAN
** MCP2515: CAN Controller, SPI compatible (2.7V~5.5V) <ref>https://www.lcsc.com/product-detail/CAN_MICROCHIP_MCP2515-I-SO_MCP2515-I-SO_C12368.html</ref>
** ATA6561: CAN Transceiver (4.5V~5.5V)<ref> https://www.lcsc.com/product-detail/CAN-ICs_Microchip-Tech-ATA6561-GAQW-N_C616016.html</ref>
** MCP2551: CAN Transceiver (4.5V~5.5V)<ref>https://www.seeedstudio.com/I2C-CAN-Bus-Module-p-5054.html</ref><ref>https://www.lcsc.com/product-detail/CAN-ICs_Microchip-Tech-MCP2551-I-SN_C7376.html</ref>
** TLE6250G: CAN Transceiver (4.5V~5.5V)<ref>https://www.lcsc.com/product-detail/CAN_Infineon_TLE6250G_TLE6250G_C111030.html </ref>
** TJA1051: CAN Transceiver (4.5V~5.5V)<ref>https://www.lcsc.com/product-detail/CAN-ICs_NXP-Semicon-TJA1051TK-3-118_C124020.html </ref>
** SN65HVD230: CAN transceiver (3V~3.6V)<ref>https://www.lcsc.com/product-detail/CAN-ICs_Texas-Instruments-SN65HVD230QDR_C16468.html</ref>
* Power regulation
** AMS 1117
* IMU
** [https://ozzmaker.com/product/berrygps-imu/ BerryGPS-IMU V4]
*** Gyroscope and Accelerometer: LSM6DSL, compatible with SPI and I2C
*** Magnetometer: LIS3MDL, compatible with SPI and I2C
* Ethernet (to communicate with the head and for debugging)
=== References ===
<references/>
[[Category: Electronics]]
1129e9859cbfe49b005766ed36910a53ec9d16b4
789
788
2024-05-01T04:28:27Z
Ben
2
wikitext
text/x-wiki
This document describes the PCBs that we use in Stompy
== Head ==
* Audio
* Video
* Multiplexer
* GMSL transmission
** MAX96701 serializer
** MAX96700 deserializer
** MAX9295D dual 4-Lane MIPI CSI-2 to GMSL
== Body ==
[[File:Raspberry Pi CAN hat.jpg|thumb|right|Raspberry Pi CAN Hat]]
* Compute Module
* CAN
** MCP2515: CAN Controller, SPI compatible (2.7V~5.5V) <ref>https://www.lcsc.com/product-detail/CAN_MICROCHIP_MCP2515-I-SO_MCP2515-I-SO_C12368.html</ref>
** ATA6561: CAN Transceiver (4.5V~5.5V)<ref> https://www.lcsc.com/product-detail/CAN-ICs_Microchip-Tech-ATA6561-GAQW-N_C616016.html</ref>
** MCP2551: CAN Transceiver (4.5V~5.5V)<ref>https://www.seeedstudio.com/I2C-CAN-Bus-Module-p-5054.html</ref><ref>https://www.lcsc.com/product-detail/CAN-ICs_Microchip-Tech-MCP2551-I-SN_C7376.html</ref>
** TLE6250G: CAN Transceiver (4.5V~5.5V)<ref>https://www.lcsc.com/product-detail/CAN_Infineon_TLE6250G_TLE6250G_C111030.html </ref>
** TJA1051: CAN Transceiver (4.5V~5.5V)<ref>https://www.lcsc.com/product-detail/CAN-ICs_NXP-Semicon-TJA1051TK-3-118_C124020.html </ref>
** SN65HVD230: CAN transceiver (3V~3.6V)<ref>https://www.lcsc.com/product-detail/CAN-ICs_Texas-Instruments-SN65HVD230QDR_C16468.html</ref>
* Power regulation
** AMS 1117
* IMU
** [https://ozzmaker.com/product/berrygps-imu/ BerryGPS-IMU V4]
*** Gyroscope and Accelerometer: LSM6DSL, compatible with SPI and I2C
*** Magnetometer: LIS3MDL, compatible with SPI and I2C
* Ethernet (to communicate with the head and for debugging)
=== References ===
<references/>
[[Category: Electronics]]
e605c052f57e043f43202c1f478a9e309c408cb5
790
789
2024-05-01T04:28:44Z
Ben
2
wikitext
text/x-wiki
This document describes the PCBs that we use in Stompy
== Head ==
* Audio
* Video
* Multiplexer
* GMSL transmission
** MAX96701 serializer
** MAX96700 deserializer
** MAX9295D dual 4-Lane MIPI CSI-2 to GMSL<ref>https://www.analog.com/en/products/max9295d.html</ref>
== Body ==
[[File:Raspberry Pi CAN hat.jpg|thumb|right|Raspberry Pi CAN Hat]]
* Compute Module
* CAN
** MCP2515: CAN Controller, SPI compatible (2.7V~5.5V) <ref>https://www.lcsc.com/product-detail/CAN_MICROCHIP_MCP2515-I-SO_MCP2515-I-SO_C12368.html</ref>
** ATA6561: CAN Transceiver (4.5V~5.5V)<ref> https://www.lcsc.com/product-detail/CAN-ICs_Microchip-Tech-ATA6561-GAQW-N_C616016.html</ref>
** MCP2551: CAN Transceiver (4.5V~5.5V)<ref>https://www.seeedstudio.com/I2C-CAN-Bus-Module-p-5054.html</ref><ref>https://www.lcsc.com/product-detail/CAN-ICs_Microchip-Tech-MCP2551-I-SN_C7376.html</ref>
** TLE6250G: CAN Transceiver (4.5V~5.5V)<ref>https://www.lcsc.com/product-detail/CAN_Infineon_TLE6250G_TLE6250G_C111030.html </ref>
** TJA1051: CAN Transceiver (4.5V~5.5V)<ref>https://www.lcsc.com/product-detail/CAN-ICs_NXP-Semicon-TJA1051TK-3-118_C124020.html </ref>
** SN65HVD230: CAN transceiver (3V~3.6V)<ref>https://www.lcsc.com/product-detail/CAN-ICs_Texas-Instruments-SN65HVD230QDR_C16468.html</ref>
* Power regulation
** AMS 1117
* IMU
** [https://ozzmaker.com/product/berrygps-imu/ BerryGPS-IMU V4]
*** Gyroscope and Accelerometer: LSM6DSL, compatible with SPI and I2C
*** Magnetometer: LIS3MDL, compatible with SPI and I2C
* Ethernet (to communicate with the head and for debugging)
=== References ===
<references/>
[[Category: Electronics]]
5fb4efee4815a2bb5d1fe0b5811fec9039545794
792
790
2024-05-01T05:34:48Z
Matt
16
Add Audio Information
wikitext
text/x-wiki
This document describes the PCBs that we use in Stompy
== Head ==
* Audio
** DSP and Audio Codec
*** Analog Devices ADAU1452 <ref>https://www.lcsc.com/product-detail/Audio-Interface-ICs_Analog-Devices-ADAU1452KCPZ_C468504.html</ref>
** Amplifier
*** Texas Instruments TPA3116D2 <ref>https://www.lcsc.com/product-detail/Audio-Power-OpAmps_Texas-Instruments-TPA3116D2DADR_C50144.html</ref>
** Microphone
*** Infineon IM69D130 <ref>https://www.lcsc.com/product-detail/MEMS-Microphones_Infineon-Technologies-IM69D130V01_C536262.html</ref>
** Speaker
*** FaitalPRO 3FE25 3" Full-Range Speaker Driver<ref>https://faitalpro.com/en/products/LF_Loudspeakers/product_details/index.php?id=401000150</ref>
* Video
* Multiplexer
* GMSL transmission
** MAX96701 serializer
** MAX96700 deserializer
** MAX9295D dual 4-Lane MIPI CSI-2 to GMSL<ref>https://www.analog.com/en/products/max9295d.html</ref>
== Body ==
[[File:Raspberry Pi CAN hat.jpg|thumb|right|Raspberry Pi CAN Hat]]
* Compute Module
* CAN
** MCP2515: CAN Controller, SPI compatible (2.7V~5.5V) <ref>https://www.lcsc.com/product-detail/CAN_MICROCHIP_MCP2515-I-SO_MCP2515-I-SO_C12368.html</ref>
** ATA6561: CAN Transceiver (4.5V~5.5V)<ref> https://www.lcsc.com/product-detail/CAN-ICs_Microchip-Tech-ATA6561-GAQW-N_C616016.html</ref>
** MCP2551: CAN Transceiver (4.5V~5.5V)<ref>https://www.seeedstudio.com/I2C-CAN-Bus-Module-p-5054.html</ref><ref>https://www.lcsc.com/product-detail/CAN-ICs_Microchip-Tech-MCP2551-I-SN_C7376.html</ref>
** TLE6250G: CAN Transceiver (4.5V~5.5V)<ref>https://www.lcsc.com/product-detail/CAN_Infineon_TLE6250G_TLE6250G_C111030.html </ref>
** TJA1051: CAN Transceiver (4.5V~5.5V)<ref>https://www.lcsc.com/product-detail/CAN-ICs_NXP-Semicon-TJA1051TK-3-118_C124020.html </ref>
** SN65HVD230: CAN transceiver (3V~3.6V)<ref>https://www.lcsc.com/product-detail/CAN-ICs_Texas-Instruments-SN65HVD230QDR_C16468.html</ref>
* Power regulation
** AMS 1117
* IMU
** [https://ozzmaker.com/product/berrygps-imu/ BerryGPS-IMU V4]
*** Gyroscope and Accelerometer: LSM6DSL, compatible with SPI and I2C
*** Magnetometer: LIS3MDL, compatible with SPI and I2C
* Ethernet (to communicate with the head and for debugging)
=== References ===
<references/>
[[Category: Electronics]]
a696d94a49e8afbc59d86511152fa6c354445e1d
794
792
2024-05-01T06:30:30Z
Matt
16
Add more chips
wikitext
text/x-wiki
This document describes the PCBs that we use in Stompy
== Head ==
* Audio
** DSP and Audio Codec
*** Analog Devices ADAU1452 <ref>https://www.lcsc.com/product-detail/Audio-Interface-ICs_Analog-Devices-ADAU1452KCPZ_C468504.html</ref>
** Digital to Analog Converter
*** Texas Instruments PCM5102A <ref> https://www.lcsc.com/product-detail/Digital-To-Analog-Converters-DACs_Texas-Instruments-Texas-Instruments-PCM5102APWR_C107671.html </ref>
** Amplifier
*** Texas Instruments TPA3116D2 <ref>https://www.lcsc.com/product-detail/Audio-Power-OpAmps_Texas-Instruments-TPA3116D2DADR_C50144.html</ref>
** Microphone
*** 2x Infineon IM69D130 <ref>https://www.lcsc.com/product-detail/MEMS-Microphones_Infineon-Technologies-IM69D130V01_C536262.html</ref>
** Speaker
*** FaitalPRO 3FE25 3" Full-Range Speaker Driver<ref>https://faitalpro.com/en/products/LF_Loudspeakers/product_details/index.php?id=401000150</ref>
* Video
** 2x MIPI CSI-2 Cameras
* Multiplexer
** Video
*** MAX9286
* GMSL transmission
** MAX96701 serializer
** MAX96700 deserializer
** MAX9295D dual 4-Lane MIPI CSI-2 to GMSL<ref>https://www.analog.com/en/products/max9295d.html</ref>
== Body ==
[[File:Raspberry Pi CAN hat.jpg|thumb|right|Raspberry Pi CAN Hat]]
* Compute Module
* CAN
** MCP2515: CAN Controller, SPI compatible (2.7V~5.5V) <ref>https://www.lcsc.com/product-detail/CAN_MICROCHIP_MCP2515-I-SO_MCP2515-I-SO_C12368.html</ref>
** ATA6561: CAN Transceiver (4.5V~5.5V)<ref> https://www.lcsc.com/product-detail/CAN-ICs_Microchip-Tech-ATA6561-GAQW-N_C616016.html</ref>
** MCP2551: CAN Transceiver (4.5V~5.5V)<ref>https://www.seeedstudio.com/I2C-CAN-Bus-Module-p-5054.html</ref><ref>https://www.lcsc.com/product-detail/CAN-ICs_Microchip-Tech-MCP2551-I-SN_C7376.html</ref>
** TLE6250G: CAN Transceiver (4.5V~5.5V)<ref>https://www.lcsc.com/product-detail/CAN_Infineon_TLE6250G_TLE6250G_C111030.html </ref>
** TJA1051: CAN Transceiver (4.5V~5.5V)<ref>https://www.lcsc.com/product-detail/CAN-ICs_NXP-Semicon-TJA1051TK-3-118_C124020.html </ref>
** SN65HVD230: CAN transceiver (3V~3.6V)<ref>https://www.lcsc.com/product-detail/CAN-ICs_Texas-Instruments-SN65HVD230QDR_C16468.html</ref>
* Power regulation
** AMS 1117
* IMU
** [https://ozzmaker.com/product/berrygps-imu/ BerryGPS-IMU V4]
*** Gyroscope and Accelerometer: LSM6DSL, compatible with SPI and I2C
*** Magnetometer: LIS3MDL, compatible with SPI and I2C
* Ethernet (to communicate with the head and for debugging)
=== References ===
<references/>
[[Category: Electronics]]
1e2a27be83b54e7612f51a998a4cfa35163a2703
795
794
2024-05-01T06:46:29Z
Matt
16
/* Head */
wikitext
text/x-wiki
This document describes the PCBs that we use in Stompy
== Head ==
* Audio
** DSP and Audio Codec
*** Analog Devices ADAU1452 <ref>https://www.lcsc.com/product-detail/Audio-Interface-ICs_Analog-Devices-ADAU1452KCPZ_C468504.html</ref>
** Digital to Analog Converter
*** Texas Instruments PCM5102A <ref> https://www.lcsc.com/product-detail/Digital-To-Analog-Converters-DACs_Texas-Instruments-Texas-Instruments-PCM5102APWR_C107671.html </ref>
** Amplifier
*** Texas Instruments TPA3116D2 <ref>https://www.lcsc.com/product-detail/Audio-Power-OpAmps_Texas-Instruments-TPA3116D2DADR_C50144.html</ref>
** Microphone
*** 2x Infineon IM69D130 <ref>https://www.lcsc.com/product-detail/MEMS-Microphones_Infineon-Technologies-IM69D130V01_C536262.html</ref>
** Speaker
*** FaitalPRO 3FE25 3" Full-Range Speaker Driver<ref>https://faitalpro.com/en/products/LF_Loudspeakers/product_details/index.php?id=401000150</ref>
-- Need audio cable connection
-- Need Power connection
* Video
** 2x MIPI CSI-2 Cameras
* Multiplexer
** Video
*** MAX9286?
* GMSL transmission
** MAX96701 serializer
** MAX96700 deserializer
** MAX9295D dual 4-Lane MIPI CSI-2 to GMSL<ref>https://www.analog.com/en/products/max9295d.html</ref>
== Body ==
[[File:Raspberry Pi CAN hat.jpg|thumb|right|Raspberry Pi CAN Hat]]
* Compute Module
* CAN
** MCP2515: CAN Controller, SPI compatible (2.7V~5.5V) <ref>https://www.lcsc.com/product-detail/CAN_MICROCHIP_MCP2515-I-SO_MCP2515-I-SO_C12368.html</ref>
** ATA6561: CAN Transceiver (4.5V~5.5V)<ref> https://www.lcsc.com/product-detail/CAN-ICs_Microchip-Tech-ATA6561-GAQW-N_C616016.html</ref>
** MCP2551: CAN Transceiver (4.5V~5.5V)<ref>https://www.seeedstudio.com/I2C-CAN-Bus-Module-p-5054.html</ref><ref>https://www.lcsc.com/product-detail/CAN-ICs_Microchip-Tech-MCP2551-I-SN_C7376.html</ref>
** TLE6250G: CAN Transceiver (4.5V~5.5V)<ref>https://www.lcsc.com/product-detail/CAN_Infineon_TLE6250G_TLE6250G_C111030.html </ref>
** TJA1051: CAN Transceiver (4.5V~5.5V)<ref>https://www.lcsc.com/product-detail/CAN-ICs_NXP-Semicon-TJA1051TK-3-118_C124020.html </ref>
** SN65HVD230: CAN transceiver (3V~3.6V)<ref>https://www.lcsc.com/product-detail/CAN-ICs_Texas-Instruments-SN65HVD230QDR_C16468.html</ref>
* Power regulation
** AMS 1117
* IMU
** [https://ozzmaker.com/product/berrygps-imu/ BerryGPS-IMU V4]
*** Gyroscope and Accelerometer: LSM6DSL, compatible with SPI and I2C
*** Magnetometer: LIS3MDL, compatible with SPI and I2C
* Ethernet (to communicate with the head and for debugging)
=== References ===
<references/>
[[Category: Electronics]]
dd928b3a3f935d2c06f17d2f9ffea93311c4bb9a
796
795
2024-05-01T06:51:28Z
Matt
16
add needed parts
wikitext
text/x-wiki
This document describes the PCBs that we use in Stompy
== Head ==
* Audio
** DSP and Audio Codec
*** Analog Devices ADAU1452 <ref>https://www.lcsc.com/product-detail/Audio-Interface-ICs_Analog-Devices-ADAU1452KCPZ_C468504.html</ref>
** Digital to Analog Converter
*** Texas Instruments PCM5102A <ref> https://www.lcsc.com/product-detail/Digital-To-Analog-Converters-DACs_Texas-Instruments-Texas-Instruments-PCM5102APWR_C107671.html </ref>
** Amplifier
*** Texas Instruments TPA3116D2 <ref>https://www.lcsc.com/product-detail/Audio-Power-OpAmps_Texas-Instruments-TPA3116D2DADR_C50144.html</ref>
** Microphone
*** 2x Infineon IM69D130 <ref>https://www.lcsc.com/product-detail/MEMS-Microphones_Infineon-Technologies-IM69D130V01_C536262.html</ref>
** Speaker (see the test-tone sketch after this list)
*** FaitalPRO 3FE25 3" Full-Range Speaker Driver<ref>https://faitalpro.com/en/products/LF_Loudspeakers/product_details/index.php?id=401000150</ref>
-- Need audio cable connection
-- Need Power connection
-- Need SPI Connections
* Video
** 2x MIPI CSI-2 Cameras
* Multiplexer
** Video
*** MAX9286?
* GMSL transmission
** MAX96701 serializer
** MAX96700 deserializer
** MAX9295D dual 4-Lane MIPI CSI-2 to GMSL<ref>https://www.analog.com/en/products/max9295d.html</ref>
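The audio parts listed above (ADAU1452 DSP, PCM5102A DAC, TPA3116D2 amplifier) would typically be exposed to the head's Linux system as an ordinary sound card. The sketch below only illustrates that assumption and is not part of the board design: it assumes the chain appears as the default ALSA playback device and that the numpy and sounddevice Python packages are installed.
<syntaxhighlight lang=python>
# Hedged sketch: play a test tone through the head's audio chain, assuming the
# DAC/amplifier appear to Linux as the default ALSA playback device.
import numpy as np
import sounddevice as sd

SAMPLE_RATE = 48000  # Hz; assumed value, match the codec configuration
DURATION = 1.0       # seconds

# Generate a quiet 440 Hz sine wave as a speaker test signal.
t = np.linspace(0, DURATION, int(SAMPLE_RATE * DURATION), endpoint=False)
tone = 0.2 * np.sin(2 * np.pi * 440 * t)

sd.play(tone, samplerate=SAMPLE_RATE)  # uses the system default output device
sd.wait()                              # block until playback finishes
</syntaxhighlight>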
== Body ==
[[File:Raspberry Pi CAN hat.jpg|thumb|right|Raspberry Pi CAN Hat]]
* Compute Module
* CAN
** MCP2515: CAN Controller, SPI compatible (2.7V~5.5V); see the usage sketch after this list<ref>https://www.lcsc.com/product-detail/CAN_MICROCHIP_MCP2515-I-SO_MCP2515-I-SO_C12368.html</ref>
** ATA6561: CAN Transceiver (4.5V~5.5V)<ref> https://www.lcsc.com/product-detail/CAN-ICs_Microchip-Tech-ATA6561-GAQW-N_C616016.html</ref>
** MCP2551: CAN Transceiver (4.5V~5.5V)<ref>https://www.seeedstudio.com/I2C-CAN-Bus-Module-p-5054.html</ref><ref>https://www.lcsc.com/product-detail/CAN-ICs_Microchip-Tech-MCP2551-I-SN_C7376.html</ref>
** TLE6250G: CAN Transceiver (4.5V~5.5V)<ref>https://www.lcsc.com/product-detail/CAN_Infineon_TLE6250G_TLE6250G_C111030.html </ref>
** TJA1051: CAN Transceiver (4.5V~5.5V)<ref>https://www.lcsc.com/product-detail/CAN-ICs_NXP-Semicon-TJA1051TK-3-118_C124020.html </ref>
** SN65HVD230: CAN transceiver (3V~3.6V)<ref>https://www.lcsc.com/product-detail/CAN-ICs_Texas-Instruments-SN65HVD230QDR_C16468.html</ref>
* Power regulation
** AMS 1117
* IMU
** [https://ozzmaker.com/product/berrygps-imu/ BerryGPS-IMU V4]
*** Gyroscope and Accelerometer: LSM6DSL, compatible with SPI and I2C
*** Magnetometer: LIS3MDL, compatible with SPI and I2C
* Ethernet (to communicate with the head and for debugging)
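The CAN parts above give the compute module its link to the rest of the body: the MCP2515 is a standalone CAN controller driven over SPI, with one of the listed transceivers handling the physical bus, as on the pictured Raspberry Pi CAN hat. The sketch below is only a usage illustration, not part of the board design: it assumes the interface has already been brought up as a SocketCAN device named can0 (for example via the standard MCP2515 device-tree overlay on a Raspberry Pi) and that the python-can package is installed; the node ID and payload bytes are made-up placeholders.
<syntaxhighlight lang=python>
# Hedged sketch: exchange frames over the MCP2515-backed CAN bus via SocketCAN.
# Assumptions (not from this page): interface name "can0", python-can installed,
# and a hypothetical node at ID 0x10 that replies to commands.
import can

bus = can.interface.Bus(channel="can0", bustype="socketcan")

# Send a placeholder "command" frame to the hypothetical node 0x10.
msg = can.Message(arbitration_id=0x10, data=[0x01, 0x7F, 0xFF], is_extended_id=False)
bus.send(msg)

# Wait up to one second for any reply on the bus and print it.
reply = bus.recv(timeout=1.0)
if reply is not None:
    print(f"id=0x{reply.arbitration_id:X} data={reply.data.hex()}")

bus.shutdown()
</syntaxhighlight>
With SocketCAN the same code works regardless of which of the transceivers above is populated, since the transceiver only affects the physical layer.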
=== References ===
<references/>
[[Category: Electronics]]
0b6fe4795c8f44a694a779580b0b7c36e8ca085a
Applying for SBIR Grants
0
189
791
2024-05-01T05:27:56Z
Ben
2
Created page with "This page provides some notes on how to apply for SBIR grants. [[Category: Stompy, Expand!]]"
wikitext
text/x-wiki
This page provides some notes on how to apply for SBIR grants.
[[Category: Stompy, Expand!]]
1ab18ff9c137e0ca0bbbcdb7040c7e574eb65f75
793
791
2024-05-01T06:04:11Z
Stompy
14
Bot expanded article
wikitext
text/x-wiki
== Applying for SBIR Grants ==
The Small Business Innovation Research (SBIR) program is a United States government program, coordinated by the Small Business Administration (SBA), intended to help small businesses engage in Federal Research/Research and Development (R/R&D) activities. The program aims to enhance both the participation of small businesses in Federal R/R&D and the commercialization potential of their R&D efforts.<ref name="SBA">[https://www.sba.gov/funding-programs/research-grants/small-business-innovation-research-program Small Business Administration: Small Business Innovation Research Program]</ref>
=== Eligibility Criteria ===
To qualify for SBIR grants, applicants should meet several criteria:
* Be a for-profit organization.
* Have fewer than 500 employees, including affiliates.
* Be independently owned and operated, and not dominant in its field of operation.
* The principal researcher must be employed by the small business.
* The company must be located in the United States.
* The small business must be majority-owned by U.S. individuals and independently operated.<ref name="SBIR_Overview">[https://www.sbir.gov/overview Small Business Innovation Research (SBIR) Overview]</ref>
=== Grant Application Process ===
# Identification of an SBIR funding opportunity at one of the federal agencies that participate in the SBIR program.
# Registration with the System for Award Management (SAM), the SBA Company Registry, and the SBIR-specific registration site (e.g., eRA Commons for NIH).
# Submission of a detailed proposal in response to a specific agency's SBIR opportunity, following the guidelines provided by that agency.
# Peer review of the submitted proposal.
# Awarding of the SBIR grant and execution of the proposed R/R&D plan.<ref name="SBAApplication">[https://www.sba.gov/federal-contracting/contracting-guide/small-business-innovation-research-and-small-business-technology-transfer Small Business Administration: Small Business Innovation Research and Small Business Technology Transfer]</ref>
{{infobox grant
| name = Small Business Innovation Research (SBIR) Grants
| administered_by = Small Business Administration (SBA)
| eligibility_link = https://www.sbir.gov/eligibility
| application_process_link = https://www.sba.gov/federal-contracting/contracting-guide/small-business-innovation-research-and-small-business-technology-transfer
| grant_amount_info = Varies by grant solicitation and agency
| catchment_area = United States
}}
[[Category: Business Funding]]
[[Category: Small and Medium Businesses]]
[[Category: Innovation]]
== References ==
<references />
f50857be2879e10c73621c799e29af391c2e7470
797
793
2024-05-02T00:24:01Z
Ben
2
wikitext
text/x-wiki
== Applying for SBIR Grants ==
The Small Business Innovation Research (SBIR) program is a United States government program, coordinated by the Small Business Administration (SBA), intended to help small businesses engage in Federal Research/Research and Development (R/R&D) activities. The program aims to enhance both the participation of small businesses in Federal R/R&D and the commercialization potential of their R&D efforts.<ref name="SBA">[https://www.sba.gov/funding-programs/research-grants/small-business-innovation-research-program Small Business Administration: Small Business Innovation Research Program]</ref>
=== Eligibility Criteria ===
To qualify for SBIR grants, applicants should meet several criteria:
* Be a for-profit organization.
* Have fewer than 500 employees, including affiliates.
* Be independently owned and operated, and not dominant in its field of operation.
* The principal researcher must be employed by the small business.
* The company must be located in the United States.
* The small business must be majority-owned by U.S. individuals and independently operated.<ref name="SBIR_Overview">[https://www.sbir.gov/overview Small Business Innovation Research (SBIR) Overview]</ref>
=== Grant Application Process ===
# Identification of an SBIR funding opportunity at one of the federal agencies that participate in the SBIR program.
# Registration with the System for Award Management (SAM), the SBA Company Registry, and the SBIR-specific registration site (e.g., eRA Commons for NIH).
# Submission of a detailed proposal in response to a specific agency's SBIR opportunity, following the guidelines provided by that agency.
# Peer review of the submitted proposal.
# Awarding of the SBIR grant and execution of the proposed R/R&D plan.<ref name="SBAApplication">[https://www.sba.gov/federal-contracting/contracting-guide/small-business-innovation-research-and-small-business-technology-transfer Small Business Administration: Small Business Innovation Research and Small Business Technology Transfer]</ref>
== References ==
<references />
01d800cab727d86ca1899222b4591c52ae2f9971
799
797
2024-05-02T04:35:59Z
Ben
2
wikitext
text/x-wiki
== Applying for SBIR Grants ==
The Small Business Innovation Research (SBIR) program is a United States government program, coordinated by the Small Business Administration (SBA), intended to help small businesses engage in Federal Research/Research and Development (R/R&D) activities. The program aims to enhance both the participation of small businesses in Federal R/R&D and the commercialization potential of their R&D efforts.<ref name="SBA">[https://www.sba.gov/funding-programs/research-grants/small-business-innovation-research-program Small Business Administration: Small Business Innovation Research Program]</ref>
=== Eligibility Criteria ===
To qualify for SBIR grants, applicants should meet several criteria:
* Be a for-profit organization.
* Have fewer than 500 employees, including affiliates.
* Be independently owned and operated, and not dominant in its field of operation.
* The principal researcher must be employed by the small business.
* The company must be located in the United States.
* The small business must be majority-owned by U.S. individuals and independently operated.<ref name="SBIR_Overview">[https://www.sbir.gov/overview Small Business Innovation Research (SBIR) Overview]</ref>
=== Grant Application Process ===
# Identification of an SBIR funding opportunity at one of the federal agencies that participate in the SBIR program.
# Registration with the System for Award Management (SAM), SBA Company registry, and the SBIR-specific registration site (e.g., eRA Commons for NIH).
# Submission of a detailed proposal in response to a specific agency's SBIR opportunity, following the guidelines provided by the respective agency.
# Peer review of the submitted proposal.
# Awarding of the SBIR grant and execution of the proposed R/R&D plan.<ref name="SBAApplication">[https://www.sba.gov/federal-contracting/contracting-guide/small-business-innovation-research-and-small-business-technology-transfer Small Business Administration: Small Business Innovation Research and Small Business Technology Transfer]</ref>
== References ==
<references />
ce4eedfca58f25ad287850ebffbc0ee0665f54ea
Servo Design
0
61
798
394
2024-05-02T00:44:49Z
108.211.178.220
0
wikitext
text/x-wiki
This page contains information about how to build a good open-source servo.
=== Open Source Servos ===
* [https://github.com/unhuman-io/obot OBot]
* [https://github.com/atopile/spin-servo-drive SPIN Servo Drive]
* [https://github.com/jcchurch13/Mechaduino-Firmware Mechaduino-Firmware]
=== Commercial Servos ===
* [[MyActuator X-Series]]
=== Controllers ===
* [https://github.com/napowderly/mcp2515 MCP2515 transceiver circuit]
* [https://github.com/adamb314/ServoProject ServoProject: making a standard servo 36x more accurate]
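Most of the projects linked above implement some variant of the same inner loop: read the shaft position from an encoder, compare it with the commanded target, and drive the motor with a PID (or cascaded) controller. The sketch below is a generic illustration of that loop and is not taken from any of the linked projects; read_encoder and set_motor_voltage are hypothetical hooks that would be replaced by real driver code.
<syntaxhighlight lang=python>
# Hedged sketch of a servo's inner position loop (generic, not from any linked project).
import time

KP, KI, KD = 8.0, 0.5, 0.05  # example gains; tune for the actual motor and load
DT = 0.001                   # 1 kHz control loop

def read_encoder() -> float:
    """Hypothetical hook: return the measured shaft angle in radians."""
    raise NotImplementedError

def set_motor_voltage(volts: float) -> None:
    """Hypothetical hook: command the motor driver (e.g. an H-bridge PWM)."""
    raise NotImplementedError

def servo_loop(target_angle: float) -> None:
    """Drive the shaft toward target_angle with a simple PID controller."""
    integral = 0.0
    prev_error = 0.0
    while True:
        error = target_angle - read_encoder()
        integral += error * DT
        derivative = (error - prev_error) / DT
        set_motor_voltage(KP * error + KI * integral + KD * derivative)
        prev_error = error
        time.sleep(DT)
</syntaxhighlight>
Real designs typically add current limiting, feed-forward terms, and anti-windup on top of this basic structure.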
[[Category: Hardware]]
[[Category: Electronics]]
[[Category: Actuators]]
5428327e83b3f071efee3098db53dfade061665a
Hyperspawn Robotics
0
190
800
2024-05-02T09:35:16Z
14.142.236.26
0
Created page with "[[File:Https://assets-global.website-files.com/60a8e50573e2d780c83ef782/62bf03c35a8471fbacf98b64 Picture1-p-500.png|thumb|Hyperspawn Logo]] Hyperspawn Robotics builds intellig..."
wikitext
text/x-wiki
[[File:Https://assets-global.website-files.com/60a8e50573e2d780c83ef782/62bf03c35a8471fbacf98b64 Picture1-p-500.png|thumb|Hyperspawn Logo]]
Hyperspawn Robotics builds intelligent humanoid robots that operate in two modes, autonomous and teleoperated, allowing them either to function independently or to be controlled remotely and immersively by a user. The robots use advanced AI and vision LLMs to navigate, make decisions, and execute tasks on their own. In teleoperated mode, users control the robot through VR gear and a motion-tracking suit, providing real-time, intuitive control and a physical presence in inaccessible or hazardous locations.
{{infobox company
| name = Hyperspawn Robotics
| website_link = [https://hyperspawn.co hyperspawn.co]
| robots = [[Shadow-1]]
}}
=== Overview ===
Hyperspawn's humanoid teleoperation system mirrors human agility and intelligence and is designed to operate in hazardous environments where human intervention is risky. The humanoid robot, controlled via a motion-tracking suit and VR headset, mimics human actions in real time, enabling precise and efficient operation in high-risk zones. The design is the result of deep research, collaboration, and a focus on using technology to improve human lives.
=== Shadow-1 ===
[[Shadow-1]], Hyperspawn's flagship humanoid robot, is designed to integrate seamlessly into human workspaces. It combines advanced sensor technology with AI-driven functionality, enabling it to perform tasks ranging from simple material handling to complex problem-solving scenarios that require interaction with human counterparts.
Shadow-1 supports both autonomous and teleoperated modes. Equipped with advanced sensors and actuators, it can navigate complex environments and perform detailed tasks. Its teleoperation capabilities allow users to control the robot remotely with high precision and real-time feedback, effectively placing them anywhere Shadow-1 can go. This makes it well suited to hazardous conditions, remote assistance, and scenarios that require human-like interaction without the associated risks.
=== Global Impact ===
Hyperspawn's goal is to put more humanoids into the world to improve people's real-world experience: an interactive, fluid device that can take physical actions on a person's behalf while expressing a personality aligned with theirs.
Teleoperation lets people be present in one another's lives much like a physical video call, which can foster more and better social connections, and it is also useful in B2B settings such as manufacturing facilities and defense.
=== Future Vision ===
Hyperspawn Robotics envisions a world of infinite mobility for humanity, in which people can physically interact from anywhere without the constraints of space and time, and in which tasks in extreme conditions, whether in defense, healthcare, or space exploration, are carried out by humanoid robots, ensuring both human safety and efficiency.
The company describes this as a stepping stone toward developing advanced autonomous humanoids that harness the latest advances in AI and integrate them seamlessly into robotics.
[[Category:Companies]]
4095901ab0c9d4e4cdc25dce0df06c8268caea98
802
800
2024-05-02T09:39:37Z
Evolve
20
wikitext
text/x-wiki
[[File:HyperspawnRoboticsLogo.png|thumb|150px|Hyperspawn Robotics]]
Hyperspawn Robotics builds intelligent humanoid robots that operate in two modes, autonomous and teleoperated, allowing them either to function independently or to be controlled remotely and immersively by a user. The robots use advanced AI and vision LLMs to navigate, make decisions, and execute tasks on their own. In teleoperated mode, users control the robot through VR gear and a motion-tracking suit, providing real-time, intuitive control and a physical presence in inaccessible or hazardous locations.
{{infobox company
| name = Hyperspawn Robotics
| website_link = [https://hyperspawn.co hyperspawn.co]
| robots = [[Shadow-1]]
}}
=== Overview ===
Hyperspawn's humanoid teleoperation system mirrors human agility and intelligence and is designed to operate in hazardous environments where human intervention is risky. The humanoid robot, controlled via a motion-tracking suit and VR headset, mimics human actions in real time, enabling precise and efficient operation in high-risk zones. The design is the result of deep research, collaboration, and a focus on using technology to improve human lives.
=== Shadow-1 ===
[[Shadow-1]], Hyperspawn's flagship humanoid robot, is designed to integrate seamlessly into human workspaces. It combines advanced sensor technology with AI-driven functionality, enabling it to perform tasks ranging from simple material handling to complex problem-solving scenarios that require interaction with human counterparts.
Shadow-1 supports both autonomous and teleoperated modes. Equipped with advanced sensors and actuators, it can navigate complex environments and perform detailed tasks. Its teleoperation capabilities allow users to control the robot remotely with high precision and real-time feedback, effectively placing them anywhere Shadow-1 can go. This makes it well suited to hazardous conditions, remote assistance, and scenarios that require human-like interaction without the associated risks.
=== Global Impact ===
Hyperspawn's goal is to put more humanoids into the world to improve people's real-world experience: an interactive, fluid device that can take physical actions on a person's behalf while expressing a personality aligned with theirs.
Teleoperation lets people be present in one another's lives much like a physical video call, which can foster more and better social connections, and it is also useful in B2B settings such as manufacturing facilities and defense.
=== Future Vision ===
Hyperspawn Robotics envisions a world of infinite mobility for humanity, in which people can physically interact from anywhere without the constraints of space and time, and in which tasks in extreme conditions, whether in defense, healthcare, or space exploration, are carried out by humanoid robots, ensuring both human safety and efficiency.
The company describes this as a stepping stone toward developing advanced autonomous humanoids that harness the latest advances in AI and integrate them seamlessly into robotics.
[[Category:Companies]]
5d035545551679292d2c657839a530c6d42c4c7a
805
802
2024-05-02T21:54:15Z
Ben
2
wikitext
text/x-wiki
[[File:HyperspawnRoboticsLogo.png|thumb|150px|Hyperspawn Robotics]]
Hyperspawn Robotics builds intelligent humanoid robots that operate in two modes, autonomous and teleoperated, allowing them either to function independently or to be controlled remotely and immersively by a user. The robots use advanced AI and vision LLMs to navigate, make decisions, and execute tasks on their own. In teleoperated mode, users control the robot through VR gear and a motion-tracking suit, providing real-time, intuitive control and a physical presence in inaccessible or hazardous locations.
{{infobox company
| name = Hyperspawn Robotics
| website_link = https://hyperspawn.co
| robots = [[Shadow-1]]
}}
=== Overview ===
Hyperspawn's humanoid teleoperation system mirrors human agility and intelligence and is designed to operate in hazardous environments where human intervention is risky. The humanoid robot, controlled via a motion-tracking suit and VR headset, mimics human actions in real time, enabling precise and efficient operation in high-risk zones. The design is the result of deep research, collaboration, and a focus on using technology to improve human lives.
=== Shadow-1 ===
[[Shadow-1]], Hyperspawn's flagship humanoid robot, is designed to integrate seamlessly into human workspaces. It combines advanced sensor technology with AI-driven functionality, enabling it to perform tasks ranging from simple material handling to complex problem-solving scenarios that require interaction with human counterparts.
Shadow-1 supports both autonomous and teleoperated modes. Equipped with advanced sensors and actuators, it can navigate complex environments and perform detailed tasks. Its teleoperation capabilities allow users to control the robot remotely with high precision and real-time feedback, effectively placing them anywhere Shadow-1 can go. This makes it well suited to hazardous conditions, remote assistance, and scenarios that require human-like interaction without the associated risks.
=== Global Impact ===
Hyperspawn's goal is to put more humanoids into the world to improve people's real-world experience: an interactive, fluid device that can take physical actions on a person's behalf while expressing a personality aligned with theirs.
Teleoperation lets people be present in one another's lives much like a physical video call, which can foster more and better social connections, and it is also useful in B2B settings such as manufacturing facilities and defense.
=== Future Vision ===
Hyperspawn Robotics envisions a world of infinite mobility for humanity, in which people can physically interact from anywhere without the constraints of space and time, and in which tasks in extreme conditions, whether in defense, healthcare, or space exploration, are carried out by humanoid robots, ensuring both human safety and efficiency.
The company describes this as a stepping stone toward developing advanced autonomous humanoids that harness the latest advances in AI and integrate them seamlessly into robotics.
[[Category:Companies]]
4d95b42bf03a3d121274ad1e5ccb73233847d32c
File:HyperspawnRoboticsLogo.png
6
191
801
2024-05-02T09:37:34Z
Evolve
20
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
Main Page
0
1
803
700
2024-05-02T18:46:13Z
Jos
17
/* Getting Started */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
Here are some resources to get started learning about humanoid robots.
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
|
|-
| [[Learning algorithms]]
| Resources for training humanoid models in simulated and real environments
|-
| [[Reinforcement Learning]]
| Resources related to understanding reinforcement learning
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== List of Actuators ===
{| class="wikitable"
|-
! Actuator
! Notes
|-
| [[MyActuator X-Series]]
| MIT Cheetah-like quasi-direct drive actuator, with planetary gears
|-
| [[OBot]]
| Open-source actuator
|-
| [[SPIN Servo]]
| Open-source actuator
|-
| [[VESCular6]]
| A project based on [[VESC]]
|-
| [[ODrive]]
|
|-
| [[Solo Motor Controller]]
| A motor controller alternative to the [[ODrive]].
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[Xpeng]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[Proxy]]
|
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Kayra.org]]
| [[Kayra]]
|}
77779a547fddb8bad0a1ff053ccfd2185cb516a3
806
803
2024-05-03T15:50:11Z
Ben
2
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
==== Resources ====
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
|
|-
| [[Learning algorithms]]
| Resources for training humanoid models in simulated and real environments
|-
| [[Reinforcement Learning]]
| Resources related to understanding reinforcement learning
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== List of Actuators ===
{| class="wikitable"
|-
! Actuator
! Notes
|-
| [[MyActuator X-Series]]
| MIT Cheetah-like quasi-direct drive actuator, with planetary gears
|-
| [[OBot]]
| Open-source actuator
|-
| [[SPIN Servo]]
| Open-source actuator
|-
| [[VESCular6]]
| A project based on [[VESC]]
|-
| [[ODrive]]
|
|-
| [[Solo Motor Controller]]
| A motor controller alternative to the [[ODrive]].
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[Xpeng]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[Proxy]]
|
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Kayra.org]]
| [[Kayra]]
|}
2fcc461504a84faeb5c1207692973d5f843026ed
810
806
2024-05-03T17:10:54Z
129.97.185.73
0
/* List of Humanoid Robots */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
==== Resources ====
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
|
|-
| [[Learning algorithms]]
| Resources for training humanoid models in simulated and real environments
|-
| [[Reinforcement Learning]]
| Resources related to understanding reinforcement learning
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== List of Actuators ===
{| class="wikitable"
|-
! Actuator
! Notes
|-
| [[MyActuator X-Series]]
| MIT Cheetah-like quasi-direct drive actuator, with planetary gears
|-
| [[OBot]]
| Open-source actuator
|-
| [[SPIN Servo]]
| Open-source actuator
|-
| [[VESCular6]]
| A project based on [[VESC]]
|-
| [[ODrive]]
|
|-
| [[Solo Motor Controller]]
| A motor controller alternative to the [[ODrive]].
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[Xpeng]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[Proxy]]
|
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Kayra.org]]
| [[Kayra]]
|}
01a22aa90f0cf5977d1dd7b5433de6ae22965bb4
815
810
2024-05-03T17:26:48Z
129.97.185.73
0
/* List of Actuators */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
==== Resources ====
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
|
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[Reinforcement Learning]]
| Resources related to understanding reinforcement learning
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== List of Actuators ===
{| class="wikitable"
|-
! Actuator
! Notes
|-
| [[MyActuator X-Series]]
| MIT Cheetah-like quasi-direct drive actuator, with planetary gears
|-
| [[OBot]]
| Open-source actuator
|-
| [[SPIN Servo]]
| Open-source actuator
|-
| [[VESCular6]]
| A project based on [[VESC]]
|-
| [[ODrive]]
|
|-
| [[Solo Motor Controller]]
| A motor controller alternative to the [[ODrive]].
|-
| [[J60]]
| An actuator built for the DEEP Robotics quadrupeds.
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[Xpeng]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[Proxy]]
|
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Kayra.org]]
| [[Kayra]]
|}
5db9be535742a2cb413bfe24c1c2102afcdafeac
817
815
2024-05-03T17:41:13Z
129.97.185.73
0
/* List of Actuators */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
==== Resources ====
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
|
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[Reinforcement Learning]]
| Resources related to understanding reinforcement learning
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== List of Actuators ===
{| class="wikitable"
|-
! Actuator
! Notes
|-
| [[MyActuator X-Series]]
| MIT Cheetah-like quasi-direct drive actuator, with planetary gears
|-
| [[OBot]]
| Open-source actuator
|-
| [[SPIN Servo]]
| Open-source actuator
|-
| [[VESCular6]]
| A project based on [[VESC]]
|-
| [[ODrive]]
|
|-
| [[Solo Motor Controller]]
| A motor controller alternative to the [[ODrive]].
|-
| [[J60]]
| Actuators built for the DEEP Robotics quadrupeds.
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[Xpeng]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[Proxy]]
|
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Kayra.org]]
| [[Kayra]]
|}
aad57f7db9598578f42bc74471a72508c541e993
819
817
2024-05-03T17:45:52Z
129.97.185.73
0
/* List of Actuators */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
==== Resources ====
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
|
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[Reinforcement Learning]]
| Resources related to understanding reinforcement learning
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
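For readers who want to see what talking to one of these buses looks like in practice, below is a minimal, illustrative sketch that sends and receives a frame with the python-can library over a Linux SocketCAN interface. The interface name, bitrate, and arbitration ID are placeholder assumptions, not values tied to any robot listed on this wiki.
<syntaxhighlight lang=python>
# Minimal CAN send/receive sketch using python-can (pip install python-can).
# Assumes a Linux SocketCAN interface named "can0", brought up beforehand with
# something like: sudo ip link set can0 up type can bitrate 1000000
import can

def main():
    # Channel name and the frame contents below are illustrative assumptions.
    with can.Bus(channel="can0", interface="socketcan") as bus:
        msg = can.Message(arbitration_id=0x123,
                          data=[0x01, 0x02, 0x03, 0x04],
                          is_extended_id=False)
        bus.send(msg)

        # Block for up to one second waiting for any frame on the bus.
        reply = bus.recv(timeout=1.0)
        if reply is not None:
            print(f"Received ID=0x{reply.arbitration_id:X} data={reply.data.hex()}")

if __name__ == "__main__":
    main()
</syntaxhighlight>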
=== List of Actuators ===
{| class="wikitable"
|-
! Actuator
! Notes
|-
| [[MyActuator X-Series]]
| MIT Cheetah-like quasi-direct drive actuator, with planetary gears
|-
| [[OBot]]
| Open-source actuator
|-
| [[SPIN Servo]]
| Open-source actuator
|-
| [[VESCular6]]
| A project based on [[VESC]]
|-
| [[ODrive]]
|
|-
| [[Solo Motor Controller]]
| A motor controller alternative to the [[ODrive]].
|-
| [[J60]]
| Actuators built for the [[DEEP Robotics]] quadrupeds.
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[Xpeng]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[Proxy]]
|
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Kayra.org]]
| [[Kayra]]
|}
9930eefa069f44676308fdeba0b359488e179218
Category:Non-humanoid Robots
14
192
804
2024-05-02T18:47:25Z
Jos
17
Created page with "== Open Source == * https://open-dynamic-robot-initiative.github.io/"
wikitext
text/x-wiki
== Open Source ==
* https://open-dynamic-robot-initiative.github.io/
4ed9c587ac0752ad133771aa0c474d12420a5a08
Getting Started with Humanoid Robots
0
193
807
2024-05-03T15:50:51Z
Ben
2
Created page with "This is a build guide for getting started experimenting with your own humanoid robot. This is incomplete; you can help by expanding it!"
wikitext
text/x-wiki
This is a build guide for getting started experimenting with your own humanoid robot.
This is incomplete; you can help by expanding it!
ff4b4af494c5d0c817cc65c2fed3dcbc2a613cd2
820
807
2024-05-03T20:13:40Z
Vrtnis
21
wikitext
text/x-wiki
This is a build guide for getting started experimenting with your own humanoid robot.
This is incomplete; you can help by expanding it!
Update: ''Work in progress; starting with a template, with plans to expand each section.''
'''Getting Started with Building and Experimenting with Your Own Humanoid Robot'''
'''Introduction'''
This guide is crafted for enthusiasts who are not just looking to study humanoid robotics but to actually build and experiment with their own robots.
Key Components and Tools
Fundamentals to Get You Started:
Humanoid Robot Anatomy: Understand the basics of sensors, actuators, and controllers. Consider what makes a humanoid robot function—from the planetary gear configurations in the joints to the complex sensor arrays for environmental interaction.
Simulation Tools:
ISAAC Sim by NVIDIA: Before assembling your robot, simulate your designs in ISAAC Sim. This tool is perfect for experimenting with different configurations in a controlled virtual environment, minimizing costs and potential damage to physical components.
Building Your Humanoid Robot
Selecting Components:
Actuators and Gearboxes: Whether you're looking at traditional servos or exploring advanced options like the MyActuator X-Series or cycloidal gears, understanding the torque, speed, and precision each component offers is crucial. Check out community-generated charts and databases for a breakdown of cost vs. performance.
Assembly Tips:
Community Forums: Leverage discussions from platforms like RobotForum to avoid common pitfalls. Whether it's selecting the right planetary gearbox or figuring out the optimal motor for each joint, community insights can be invaluable.
Programming and Control
ROS (Robot Operating System): Start with ROS for an extensive suite of tools for programming and control, suitable for managing complex robotic functions.
Custom Software Solutions: Explore custom algorithms for adaptive control or reactive behaviors. Integrate advanced sensor feedback loops for real-time adjustments.
Experimenting with Your Humanoid Robot
Testing and Iteration:
Virtual before Physical: Use ISAAC Sim to test your designs under various simulated conditions to refine your robot's mechanics and electronics.
Real-World Testing: Gradually transition to physical testing, beginning with simple tasks and moving to more complex interactions.
Data Collection and Analysis:
Camera Systems: Consider integrating advanced camera systems like those from e-con Systems or Arducam for visual feedback and navigation. Discuss camera choices, considering factors like latency, resolution, and integration ease with your main control system.
Advanced Customization and Community Engagement
Open Source Projects: Contribute to or start your own open-source project. For instance, platforms like GitHub host numerous projects where you can collaborate with others, such as the Kayra project, a 3D printable open-source humanoid robot.
Modular Design: Engage in modular robot design to easily swap components or aesthetics. This approach allows for extensive customization and upgrades over time.
Safety, Ethics, and Continuous Learning
Safety Protocols: Always implement robust safety measures when testing and demonstrating your robot.
f698531ee5696770ea613c467c9ed371902cf4d1
821
820
2024-05-03T20:19:28Z
Vrtnis
21
wikitext
text/x-wiki
This is a build guide for getting started experimenting with your own humanoid robot.
This is incomplete; you can help by expanding it!
Update: ''Work in progress; starting with a template, with plans to expand each section.''
'''Getting Started with Building and Experimenting with Your Own Humanoid Robot'''
This guide is crafted for enthusiasts who are not just looking to study humanoid robotics but to actually build and experiment with their own robots.
== Key Components and Tools ==
===== Fundamentals to Get You Started =====
Humanoid Robot Anatomy: Understand the basics of sensors, actuators, and controllers. Consider what makes a humanoid robot function—from the planetary gear configurations in the joints to the complex sensor arrays for environmental interaction.
===== Simulation Tools: =====
ISAAC Sim by NVIDIA:
Before assembling your robot, simulate your designs in ISAAC Sim. This tool is perfect for experimenting with different configurations in a controlled virtual environment, minimizing costs and potential damage to physical components.
== Building Your Humanoid Robot ==
Selecting Components:
Actuators and Gearboxes: Whether you're looking at traditional servos or exploring advanced options like the MyActuator X-Series or cycloidal gears, understanding the torque, speed, and precision each component offers is crucial. Check out community-generated charts and databases for a breakdown of cost vs. performance.
Assembly Tips:
Community Forums: Leverage discussions from platforms like RobotForum to avoid common pitfalls. Whether it's selecting the right planetary gearbox or figuring out the optimal motor for each joint, community insights can be invaluable.
Programming and Control
ROS (Robot Operating System): Start with ROS for an extensive suite of tools for programming and control, suitable for managing complex robotic functions.
Custom Software Solutions: Explore custom algorithms for adaptive control or reactive behaviors. Integrate advanced sensor feedback loops for real-time adjustments.
Experimenting with Your Humanoid Robot
Testing and Iteration:
Virtual before Physical: Use ISAAC Sim to test your designs under various simulated conditions to refine your robot's mechanics and electronics.
Real-World Testing: Gradually transition to physical testing, beginning with simple tasks and moving to more complex interactions.
Data Collection and Analysis:
Camera Systems: Consider integrating advanced camera systems like those from e-con Systems or Arducam for visual feedback and navigation. Discuss camera choices, considering factors like latency, resolution, and integration ease with your main control system.
Advanced Customization and Community Engagement
Open Source Projects: Contribute to or start your own open-source project. For instance, platforms like GitHub host numerous projects where you can collaborate with others, such as the Kayra project, a 3D printable open-source humanoid robot.
Modular Design: Engage in modular robot design to easily swap components or aesthetics. This approach allows for extensive customization and upgrades over time.
Safety, Ethics, and Continuous Learning
Safety Protocols: Always implement robust safety measures when testing and demonstrating your robot.
e9964219991d1e74022ffef7ba64e97982d5e7e1
822
821
2024-05-03T20:21:58Z
Vrtnis
21
wikitext
text/x-wiki
This is a build guide for getting started experimenting with your own humanoid robot.
This is incomplete; you can help by expanding it!
Update: ''Work in progress; starting with a template, with plans to expand each section.''
'''Getting Started with Building and Experimenting with Your Own Humanoid Robot'''
This guide is crafted for enthusiasts who are not just looking to study humanoid robotics but to actually build and experiment with their own robots.
== Key Components and Tools ==
===== Fundamentals to Get You Started =====
Humanoid Robot Anatomy: Understand the basics of sensors, actuators, and controllers. Consider what makes a humanoid robot function—from the planetary gear configurations in the joints to the complex sensor arrays for environmental interaction.
===== Simulation Tools: =====
ISAAC Sim by NVIDIA:
Before assembling your robot, simulate your designs in ISAAC Sim. This tool is perfect for experimenting with different configurations in a controlled virtual environment, minimizing costs and potential damage to physical components.
== Building Your Humanoid Robot ==
Selecting Components:
Actuators and Gearboxes: Whether you're looking at traditional servos or exploring advanced options like the MyActuator X-Series or cycloidal gears, understanding the torque, speed, and precision each component offers is crucial. Check out community-generated charts and databases for a breakdown of cost vs. performance.
Assembly Tips:
Community Forums: Leverage discussions from platforms like RobotForum to avoid common pitfalls. Whether it's selecting the right planetary gearbox or figuring out the optimal motor for each joint, community insights can be invaluable.
Programming and Control
ROS (Robot Operating System): Start with ROS for an extensive suite of tools for programming and control, suitable for managing complex robotic functions.
Custom Software Solutions: Explore custom algorithms for adaptive control or reactive behaviors. Integrate advanced sensor feedback loops for real-time adjustments.
Experimenting with Your Humanoid Robot
Testing and Iteration:
Virtual before Physical: Use ISAAC Sim to test your designs under various simulated conditions to refine your robot's mechanics and electronics.
Real-World Testing: Gradually transition to physical testing, beginning with simple tasks and moving to more complex interactions.
Data Collection and Analysis:
Camera Systems: Consider integrating advanced camera systems like those from e-con Systems or Arducam for visual feedback and navigation. Discuss camera choices, considering factors like latency, resolution, and integration ease with your main control system.
Advanced Customization and Community Engagement
Open Source Projects: Contribute to or start your own open-source project. For instance, platforms like GitHub host numerous projects where you can collaborate with others, such as the Kayra project, a 3D printable open-source humanoid robot.
Modular Design: Engage in modular robot design to easily swap components or aesthetics. This approach allows for extensive customization and upgrades over time.
Safety, Ethics, and Continuous Learning
Safety Protocols: Always implement robust safety measures when testing and demonstrating your robot.
ad2ddd376428b10b34acfdd736e699caa2857810
823
822
2024-05-03T20:30:10Z
Vrtnis
21
wikitext
text/x-wiki
This is a build guide for getting started experimenting with your own humanoid robot.
This is incomplete; you can help by expanding it!
Update: ''Work in progress; starting with a template, with plans to expand each section.''
This guide is crafted for enthusiasts who are not just looking to study humanoid robotics but to actually build and experiment with their own robots.
== Key Components and Tools ==
===== Fundamentals to Get You Started =====
Humanoid Robot Anatomy: Understand the basics of sensors, actuators, and controllers. Consider what makes a humanoid robot function—from the planetary gear configurations in the joints to the complex sensor arrays for environmental interaction.
===== Simulation Tools: =====
ISAAC Sim by NVIDIA:
Before assembling your robot, simulate your designs in ISAAC Sim. This tool is perfect for experimenting with different configurations in a controlled virtual environment, minimizing costs and potential damage to physical components.
== Building Your Humanoid Robot ==
===== Selecting Components =====
Actuators and Gearboxes: Whether you're looking at traditional servos or exploring advanced options like the MyActuator X-Series or cycloidal gears, understanding the torque, speed, and precision each component offers is crucial. Check out community-generated charts and databases for a breakdown of cost vs. performance.
Assembly Tips:
Community Forums: Leverage discussions from platforms like RobotForum to avoid common pitfalls. Whether it's selecting the right planetary gearbox or figuring out the optimal motor for each joint, community insights can be invaluable.
Programming and Control
ROS (Robot Operating System): Start with ROS for an extensive suite of tools for programming and control, suitable for managing complex robotic functions.
Custom Software Solutions: Explore custom algorithms for adaptive control or reactive behaviors. Integrate advanced sensor feedback loops for real-time adjustments.
Experimenting with Your Humanoid Robot
Testing and Iteration:
Virtual before Physical: Use ISAAC Sim to test your designs under various simulated conditions to refine your robot's mechanics and electronics.
Real-World Testing: Gradually transition to physical testing, beginning with simple tasks and moving to more complex interactions.
Data Collection and Analysis:
Camera Systems: Consider integrating advanced camera systems like those from e-con Systems or Arducam for visual feedback and navigation. Discuss camera choices, considering factors like latency, resolution, and integration ease with your main control system.
Advanced Customization and Community Engagement
Open Source Projects: Contribute to or start your own open-source project. For instance, platforms like GitHub host numerous projects where you can collaborate with others, such as the Kayra project, a 3D printable open-source humanoid robot.
Modular Design: Engage in modular robot design to easily swap components or aesthetics. This approach allows for extensive customization and upgrades over time.
Safety, Ethics, and Continuous Learning
Safety Protocols: Always implement robust safety measures when testing and demonstrating your robot.
a94a0d1f46b6df1b3580b16de2448a96b2d21f7c
824
823
2024-05-03T20:31:25Z
Vrtnis
21
wikitext
text/x-wiki
This is a build guide for getting started experimenting with your own humanoid robot.
This is incomplete; you can help by expanding it!
Update: ''Work in progress; starting with a template, with plans to expand each section.''
This guide is crafted for enthusiasts who are not just looking to study humanoid robotics but to actually build and experiment with their own robots.
== Key Components and Tools ==
===== Fundamentals to Get You Started =====
Humanoid Robot Anatomy: Understand the basics of sensors, actuators, and controllers. Consider what makes a humanoid robot function—from the planetary gear configurations in the joints to the complex sensor arrays for environmental interaction.
===== Simulation Tools: =====
ISAAC Sim by NVIDIA:
Before assembling your robot, simulate your designs in ISAAC Sim. This tool is perfect for experimenting with different configurations in a controlled virtual environment, minimizing costs and potential damage to physical components.
== Building Your Humanoid Robot ==
=== Selecting Components ===
==== Actuators and Gearboxes ====
Whether you're looking at traditional servos or exploring advanced options like the MyActuator X-Series or cycloidal gears, understanding the torque, speed, and precision each component offers is crucial. Check out community-generated charts and databases for a breakdown of cost vs. performance.
Assembly Tips:
Community Forums: Leverage discussions from platforms like RobotForum to avoid common pitfalls. Whether it's selecting the right planetary gearbox or figuring out the optimal motor for each joint, community insights can be invaluable.
Programming and Control
ROS (Robot Operating System): Start with ROS for an extensive suite of tools for programming and control, suitable for managing complex robotic functions.
Custom Software Solutions: Explore custom algorithms for adaptive control or reactive behaviors. Integrate advanced sensor feedback loops for real-time adjustments.
Experimenting with Your Humanoid Robot
Testing and Iteration:
Virtual before Physical: Use ISAAC Sim to test your designs under various simulated conditions to refine your robot's mechanics and electronics.
Real-World Testing: Gradually transition to physical testing, beginning with simple tasks and moving to more complex interactions.
Data Collection and Analysis:
Camera Systems: Consider integrating advanced camera systems like those from e-con Systems or Arducam for visual feedback and navigation. Discuss camera choices, considering factors like latency, resolution, and integration ease with your main control system.
Advanced Customization and Community Engagement
Open Source Projects: Contribute to or start your own open-source project. For instance, platforms like GitHub host numerous projects where you can collaborate with others, such as the Kayra project, a 3D printable open-source humanoid robot.
Modular Design: Engage in modular robot design to easily swap components or aesthetics. This approach allows for extensive customization and upgrades over time.
Safety, Ethics, and Continuous Learning
Safety Protocols: Always implement robust safety measures when testing and demonstrating your robot.
1e5d8f0e61cc6378a613c68e7cff3facaf2e16a5
825
824
2024-05-03T20:38:17Z
Vrtnis
21
wikitext
text/x-wiki
This is a build guide for getting started experimenting with your own humanoid robot.
This is incomplete; you can help by expanding it!
Update: ''Work in progress; starting with a template, with plans to expand each section.''
This guide is crafted for enthusiasts who are not just looking to study humanoid robotics but to actually build and experiment with their own robots.
== Building Your Humanoid Robot ==
=== Selecting Components ===
==== Actuators and Gearboxes ====
Whether you're looking at traditional servos or exploring advanced options like the MyActuator X-Series or cycloidal gears, understanding the torque, speed, and precision each component offers is crucial. Check out community-generated charts and databases for a breakdown of cost vs. performance.
==== Assembly Tips ====
===== Community Forums =====
Leverage discussions from platforms like RobotForum to avoid common pitfalls. Whether it's selecting the right planetary gearbox or figuring out the optimal motor for each joint, community insights can be invaluable.
=== Programming and Control ===
==== ROS (Robot Operating System) ====
Start with ROS for an extensive suite of tools for programming and control, suitable for managing complex robotic functions.
==== Custom Software Solutions ====
Explore custom algorithms for adaptive control or reactive behaviors. Integrate advanced sensor feedback loops for real-time adjustments.
== Experimenting with Your Humanoid Robot ==
=== Testing and Iteration ===
==== Virtual before Physical ====
Use ISAAC Sim to test your designs under various simulated conditions to refine your robot's mechanics and electronics.
==== Real-World Testing ====
Gradually transition to physical testing, beginning with simple tasks and moving to more complex interactions.
=== Data Collection and Analysis ===
==== Camera Systems ====
Consider integrating advanced camera systems like those from e-con Systems or Arducam for visual feedback and navigation. When choosing a camera, weigh factors such as latency, resolution, and ease of integration with your main control system.
== Advanced Customization and Community Engagement ==
=== Open Source Projects ===
Contribute to or start your own open-source project. For instance, platforms like GitHub host numerous projects where you can collaborate with others, such as the Kayra project, a 3D printable open-source humanoid robot.
=== Modular Design ===
Engage in modular robot design to easily swap components or aesthetics. This approach allows for extensive customization and upgrades over time.
== Safety and Continuous Learning ==
=== Safety Protocols ===
Always implement robust safety measures when testing and demonstrating your robot.
5a4bcff2ef6663f92db2634665e64b5bea7e4d3a
826
825
2024-05-03T20:38:53Z
Vrtnis
21
wikitext
text/x-wiki
This is a build guide for getting started experimenting with your own humanoid robot.
This is incomplete; you can help by expanding it!
Update: ''Work in progress; starting with a template, with plans to expand each section.''
This guide is crafted for enthusiasts who are not just looking to study humanoid robotics but to actually build and experiment with their own robots.
== Building Your Humanoid Robot ==
=== Selecting Components ===
==== Actuators and Gearboxes ====
Whether you're looking at traditional servos or exploring advanced options like the MyActuator X-Series or cycloidal gears, understanding the torque, speed, and precision each component offers is crucial. Check out community-generated charts and databases for a breakdown of cost vs. performance.
==== Assembly Tips ====
===== Community Forums =====
Leverage discussions from platforms like RobotForum to avoid common pitfalls. Whether it's selecting the right planetary gearbox or figuring out the optimal motor for each joint, community insights can be invaluable.
=== Programming and Control ===
==== ROS (Robot Operating System) ====
Start with ROS for an extensive suite of tools for programming and control, suitable for managing complex robotic functions.
==== Custom Software Solutions ====
Explore custom algorithms for adaptive control or reactive behaviors. Integrate advanced sensor feedback loops for real-time adjustments.
== Experimenting with Your Humanoid Robot ==
=== Testing and Iteration ===
==== Virtual before Physical ====
Use ISAAC Sim to test your designs under various simulated conditions to refine your robot's mechanics and electronics.
==== Real-World Testing ====
Gradually transition to physical testing, beginning with simple tasks and moving to more complex interactions.
=== Data Collection and Analysis ===
==== Camera Systems ====
Consider integrating advanced camera systems like those from e-con Systems or Arducam for visual feedback and navigation. When choosing a camera, weigh factors such as latency, resolution, and ease of integration with your main control system.
== Advanced Customization and Community Engagement ==
=== Open Source Projects ===
Contribute to or start your own open-source project. For instance, platforms like GitHub host numerous projects where you can collaborate with others, such as the Kayra project, a 3D printable open-source humanoid robot.
=== Modular Design ===
Engage in modular robot design to easily swap components or aesthetics. This approach allows for extensive customization and upgrades over time.
== Safety and Continuous Learning ==
=== Safety Protocols ===
Always implement robust safety measures when testing and demonstrating your robot.
9e8827f79564613254cba3c6074a893e7f1156d1
827
826
2024-05-03T20:41:23Z
Vrtnis
21
wikitext
text/x-wiki
This is a build guide for getting started experimenting with your own humanoid robot.
This is incomplete; you can help by expanding it!
Update: ''Work in progress; starting with a template, with plans to expand each section.''
This guide is crafted for enthusiasts who are not just looking to study humanoid robotics but to actually build and experiment with their own robots.
== Building Your Humanoid Robot ==
=== Selecting Components ===
==== Actuators and Gearboxes ====
Whether you're looking at traditional servos or exploring advanced options like the MyActuator X-Series or cycloidal gears, understanding the torque, speed, and precision each component offers is crucial. Check out community-generated charts and databases for a breakdown of cost vs. performance.
==== Assembly Tips ====
===== Community Forums =====
Leverage discussions from platforms like RobotForum to avoid common pitfalls. Whether it's selecting the right planetary gearbox or figuring out the optimal motor for each joint, community insights can be invaluable.
=== Programming and Control ===
==== ROS (Robot Operating System) ====
Start with ROS for an extensive suite of tools for programming and control, suitable for managing complex robotic functions.
==== Custom Software Solutions ====
Explore custom algorithms for adaptive control or reactive behaviors. Integrate advanced sensor feedback loops for real-time adjustments.
== Experimenting with Your Humanoid Robot ==
=== Testing and Iteration ===
==== Virtual before Physical ====
Use ISAAC Sim to test your designs under various simulated conditions to refine your robot's mechanics and electronics.
==== Real-World Testing ====
Gradually transition to physical testing, beginning with simple tasks and moving to more complex interactions.
=== Data Collection and Analysis ===
==== Camera Systems ====
Consider integrating advanced camera systems like those from e-con Systems or Arducam for visual feedback and navigation. When choosing a camera, weigh factors such as latency, resolution, and ease of integration with your main control system.
== Advanced Customization and Community Engagement ==
=== Open Source Projects ===
Contribute to or start your own open-source project. For instance, platforms like GitHub host numerous projects where you can collaborate with others, such as [https://github.com/kscalelabs K-Scale Labs].
=== Modular Design ===
Engage in modular robot design to easily swap components or aesthetics. This approach allows for extensive customization and upgrades over time.
== Safety and Continuous Learning ==
=== Safety Protocols ===
Always implement robust safety measures when testing and demonstrating your robot.
3feff07d7b536676b065eb293b56a5af0ddb627a
828
827
2024-05-03T21:25:58Z
Vrtnis
21
wikitext
text/x-wiki
This is a build guide for getting started experimenting with your own humanoid robot.
This is incomplete; you can help by expanding it!
Update: ''Work in progress; starting with a template, with plans to expand each section.''
This guide is crafted for enthusiasts who are not just looking to study humanoid robotics but to actually build and experiment with their own robots.
== Building Your Humanoid Robot ==
=== Selecting Components ===
==== Actuators and Gearboxes ====
== Actuators in Humanoid Robotics: Insights and Innovations ==
=== Actuator Types and Design Inspirations ===
==== Planetary and Cycloidal Gear Actuators ====
These actuators remain popular in the robotics community due to their high torque output and compact form factors. Planetary gears are favored for their efficiency and ability to handle high power densities, crucial for humanoid robotics. Cycloidal gears offer superior load-bearing capabilities and minimal backlash, ideal for precise motion control.
==== Series Elastic and Quasi-Direct Drive Actuators ====
Series Elastic Actuators (SEAs) are used in applications requiring safe and compliant human-robot interaction. They place an elastic element between the gear train and the load, allowing for energy absorption, torque sensing, and safer interactions. Quasi-direct-drive actuators use low gear ratios to balance the control fidelity and backdrivability of direct drives with the torque density of geared systems, promoting natural and responsive movements.
==== MIT Cheetah Actuator ====
The MIT Cheetah actuator design is a notable example that several community members are considering emulating. Its design optimizes for rapid, dynamic movements and could potentially set a standard for agile robotic locomotion.
=== Open-Source Development and Collaboration ===
==== SPIN: A Revolutionary Servo Project ====
The [https://github.com/atopile/spin-servo-drive SPIN project] by Atopile is an open-source hardware effort aimed at making it easier and more cost-effective to use BLDC servo motors. It is particularly notable for its potential to democratize high-quality actuator technology, making it accessible to a broader range of developers and hobbyists.
=== Community Insights and Future Directions ===
==== Comprehensive Actuator Comparisons ====
The community actively discusses the need for a universal platform to compare and contrast the cost and performance of commercially available actuators. This could involve developing a comprehensive database or chart detailing each actuator's cost per Newton-meter, control schemes, and RPM, providing a valuable resource for both newcomers and experienced developers.
==== Custom Actuator Developments ====
There are also discussions about custom actuators tailored to specific applications. For example, community discussion of [https://irisdynamics.com/products/orca-series Iris Dynamics' Orca series of electric linear actuators] suggests they can approach the capabilities of human muscle, making them particularly interesting for humanoid applications.
=== Global Collaborations and Innovations ===
==== Actuator Design Finalization for Public Release ====
Collaborations, such as those discussed for finalizing actuator designs for public releases, highlight the global effort in refining robotic components. These collaborations often result in minor but crucial adjustments that enhance the overall functionality and integration of actuators into robotic systems.
== Assembly Tips ==
===== Community Forums =====
Leverage discussions from platforms like RobotForum to avoid common pitfalls. Whether it's selecting the right planetary gearbox or figuring out the optimal motor for each joint, community insights can be invaluable.
=== Programming and Control ===
==== ROS (Robot Operating System) ====
Start with ROS for an extensive suite of tools for programming and control, suitable for managing complex robotic functions.
==== Custom Software Solutions ====
Explore custom algorithms for adaptive control or reactive behaviors. Integrate advanced sensor feedback loops for real-time adjustments.
== Experimenting with Your Humanoid Robot ==
=== Testing and Iteration ===
==== Virtual before Physical ====
Use ISAAC Sim to test your designs under various simulated conditions to refine your robot's mechanics and electronics.
==== Real-World Testing ====
Gradually transition to physical testing, beginning with simple tasks and moving to more complex interactions.
=== Data Collection and Analysis ===
==== Camera Systems ====
Consider integrating advanced camera systems like those from e-con Systems or Arducam for visual feedback and navigation. When choosing a camera, weigh factors such as latency, resolution, and ease of integration with your main control system.
== Advanced Customization and Community Engagement ==
=== Open Source Projects ===
Contribute to or start your own open-source project. For instance, platforms like GitHub host numerous projects where you can collaborate with others, such as [https://github.com/kscalelabs K-Scale Labs].
=== Modular Design ===
Engage in modular robot design to easily swap components or aesthetics. This approach allows for extensive customization and upgrades over time.
== Safety and Continuous Learning ==
=== Safety Protocols ===
Always implement robust safety measures when testing and demonstrating your robot.
9d8875084327c2dbc3b40f69011298347ea93da5
829
828
2024-05-03T21:27:38Z
Vrtnis
21
wikitext
text/x-wiki
This is a build guide for getting started experimenting with your own humanoid robot.
This is incomplete; you can help by expanding it!
Update: ''Work in progress; starting with a template, with plans to expand each section.''
This guide is crafted for enthusiasts who are not just looking to study humanoid robotics but to actually build and experiment with their own robots.
== Building Your Humanoid Robot ==
=== Selecting Components ===
== Actuators and Gearboxes ==
=== Actuator Types and Design Inspirations ===
==== Planetary and Cycloidal Gear Actuators ====
These actuators remain popular in the robotics community due to their high torque output and compact form factors. Planetary gears are favored for their efficiency and ability to handle high power densities, crucial for humanoid robotics. Cycloidal gears offer superior load-bearing capabilities and minimal backlash, ideal for precise motion control.
==== Series Elastic and Quasi-Direct Drive Actuators ====
Series Elastic Actuators (SEAs) are used in applications requiring safe and compliant human-robot interaction. They place an elastic element between the gear train and the load, allowing for energy absorption, torque sensing, and safer interactions. Quasi-direct-drive actuators use low gear ratios to balance the control fidelity and backdrivability of direct drives with the torque density of geared systems, promoting natural and responsive movements.
==== MIT Cheetah Actuator ====
The MIT Cheetah actuator design is a notable example that several community members are considering emulating. Its design optimizes for rapid, dynamic movements and could potentially set a standard for agile robotic locomotion.
=== Open-Source Development and Collaboration ===
==== SPIN: A Revolutionary Servo Project ====
The [https://github.com/atopile/spin-servo-drive SPIN project] by Atopile is an open-source hardware effort aimed at making it easier and more cost-effective to use BLDC servo motors. It is particularly notable for its potential to democratize high-quality actuator technology, making it accessible to a broader range of developers and hobbyists.
=== Community Insights and Future Directions ===
==== Comprehensive Actuator Comparisons ====
The community actively discusses the need for a universal platform to compare and contrast the cost and performance of commercially available actuators. This could involve developing a comprehensive database or chart detailing each actuator's cost per Newton-meter, control schemes, and RPM, providing a valuable resource for both newcomers and experienced developers.
==== Custom Actuator Developments ====
There are also discussions about custom actuators tailored to specific applications. For example, community discussion of [https://irisdynamics.com/products/orca-series Iris Dynamics' Orca series of electric linear actuators] suggests they can approach the capabilities of human muscle, making them particularly interesting for humanoid applications.
== Assembly Tips ==
===== Community Forums =====
Leverage discussions from platforms like RobotForum to avoid common pitfalls. Whether it's selecting the right planetary gearbox or figuring out the optimal motor for each joint, community insights can be invaluable.
=== Programming and Control ===
==== ROS (Robot Operating System) ====
Start with ROS for an extensive suite of tools for programming and control, suitable for managing complex robotic functions.
==== Custom Software Solutions ====
Explore custom algorithms for adaptive control or reactive behaviors. Integrate advanced sensor feedback loops for real-time adjustments.
== Experimenting with Your Humanoid Robot ==
=== Testing and Iteration ===
==== Virtual before Physical ====
Use ISAAC Sim to test your designs under various simulated conditions to refine your robot's mechanics and electronics.
==== Real-World Testing ====
Gradually transition to physical testing, beginning with simple tasks and moving to more complex interactions.
=== Data Collection and Analysis ===
==== Camera Systems ====
Consider integrating advanced camera systems like those from e-con Systems or Arducam for visual feedback and navigation. When choosing a camera, weigh factors such as latency, resolution, and ease of integration with your main control system.
== Advanced Customization and Community Engagement ==
=== Open Source Projects ===
Contribute to or start your own open-source project. For instance, platforms like GitHub host numerous projects where you can collaborate with others, such as [https://github.com/kscalelabs K-Scale Labs].
=== Modular Design ===
Engage in modular robot design to easily swap components or aesthetics. This approach allows for extensive customization and upgrades over time.
== Safety and Continuous Learning ==
=== Safety Protocols ===
Always implement robust safety measures when testing and demonstrating your robot.
af5cfcf90d94d5f43eeb78be516f9d765b2c3fc5
830
829
2024-05-03T21:30:02Z
Vrtnis
21
wikitext
text/x-wiki
This is a build guide for getting started experimenting with your own humanoid robot.
This is incomplete; you can help by expanding it!
Update: ''Work in progress; starting with a template, with plans to expand each section.''
This guide is crafted for enthusiasts who are not just looking to study humanoid robotics but to actually build and experiment with their own robots.
== Building Your Humanoid Robot ==
== Actuators and Gearboxes ==
=== Actuator Types and Design Inspirations ===
==== Planetary and Cycloidal Gear Actuators ====
These actuators remain popular in the robotics community due to their high torque output and compact form factors. Planetary gears are favored for their efficiency and ability to handle high power densities, crucial for humanoid robotics. Cycloidal gears offer superior load-bearing capabilities and minimal backlash, ideal for precise motion control.
==== Series Elastic and Quasi-Direct Drive Actuators ====
Series Elastic Actuators (SEAs) are used in applications requiring safe and compliant human-robot interaction. They place an elastic element between the gear train and the load, allowing for energy absorption, torque sensing, and safer interactions. Quasi-direct-drive actuators use low gear ratios to balance the control fidelity and backdrivability of direct drives with the torque density of geared systems, promoting natural and responsive movements.
==== MIT Cheetah Actuator ====
The MIT Cheetah actuator design is a notable example that several community members are considering emulating. Its design optimizes for rapid, dynamic movements and could potentially set a standard for agile robotic locomotion.
=== Open-Source Development and Collaboration ===
==== SPIN: A Revolutionary Servo Project ====
The [https://github.com/atopile/spin-servo-drive SPIN project] by Atopile is an open-source hardware effort aimed at making it easier and more cost-effective to use BLDC servo motors. It is particularly notable for its potential to democratize high-quality actuator technology, making it accessible to a broader range of developers and hobbyists.
=== Community Insights and Future Directions ===
==== Comprehensive Actuator Comparisons ====
The community actively discusses the need for a universal platform to compare and contrast the cost and performance of commercially available actuators. This could involve developing a comprehensive database or chart detailing each actuator's cost per Newton-meter, control schemes, and RPM, providing a valuable resource for both newcomers and experienced developers.
==== Custom Actuator Developments ====
There are also discussions about custom actuators tailored to specific applications. For example, community discussion of [https://irisdynamics.com/products/orca-series Iris Dynamics' Orca series of electric linear actuators] suggests they can approach the capabilities of human muscle, making them particularly interesting for humanoid applications.
== Assembly Tips ==
===== Community Forums =====
Leverage discussions from platforms like RobotForum to avoid common pitfalls. Whether it's selecting the right planetary gearbox or figuring out the optimal motor for each joint, community insights can be invaluable.
=== Programming and Control ===
==== ROS (Robot Operating System) ====
Start with ROS for an extensive suite of tools for programming and control, suitable for managing complex robotic functions.
==== Custom Software Solutions ====
Explore custom algorithms for adaptive control or reactive behaviors. Integrate advanced sensor feedback loops for real-time adjustments.
== Experimenting with Your Humanoid Robot ==
=== Testing and Iteration ===
==== Virtual before Physical ====
Use ISAAC Sim to test your designs under various simulated conditions to refine your robot's mechanics and electronics.
==== Real-World Testing ====
Gradually transition to physical testing, beginning with simple tasks and moving to more complex interactions.
=== Data Collection and Analysis ===
==== Camera Systems ====
Consider integrating advanced camera systems like those from e-con Systems or Arducam for visual feedback and navigation. When choosing a camera, weigh factors such as latency, resolution, and ease of integration with your main control system.
== Advanced Customization and Community Engagement ==
=== Open Source Projects ===
Contribute to or start your own open-source project. For instance, platforms like GitHub host numerous projects where you can collaborate with others, such as [https://github.com/kscalelabs K-Scale Labs].
=== Modular Design ===
Engage in modular robot design to easily swap components or aesthetics. This approach allows for extensive customization and upgrades over time.
== Safety and Continuous Learning ==
=== Safety Protocols ===
Always implement robust safety measures when testing and demonstrating your robot.
7e5f226b5e36d2e9fe249d8aa1074ed1d761c81f
831
830
2024-05-03T21:34:43Z
Vrtnis
21
wikitext
text/x-wiki
This is a build guide for getting started experimenting with your own humanoid robot.
This is incomplete; you can help by expanding it!
Update: ''Work in progress; starting with a template, with plans to expand each section.''
This guide is crafted for enthusiasts who are not just looking to study humanoid robotics but to actually build and experiment with their own robots.
== Building Your Humanoid Robot ==
In humanoid robotics, choosing the right components, such as actuators and gearboxes, is crucial. Builders can use planetary and cycloidal gear actuators for precision and strength, along with series elastic and quasi-direct-drive actuators for smoother, more natural movements. Advanced designs like the MIT Cheetah actuator push the boundaries with fast, agile movements. Projects like the SPIN initiative are also key, as they make high-quality actuator technology more accessible, helping the field evolve and improve.
== Actuators and Gearboxes ==
=== Actuator Types and Design Inspirations ===
==== Planetary and Cycloidal Gear Actuators ====
These actuators remain popular in the robotics community due to their high torque output and compact form factors. Planetary gears are favored for their efficiency and ability to handle high power densities, crucial for humanoid robotics. Cycloidal gears offer superior load-bearing capabilities and minimal backlash, ideal for precise motion control.
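As a rough illustration of why these reductions matter, output torque scales with the gear ratio (minus gearbox losses) while output speed drops by the same factor. The short sketch below walks through that calculation with made-up motor figures; none of the numbers describe a real actuator.
<syntaxhighlight lang=python>
# Back-of-the-envelope gearbox sizing sketch. All numbers are illustrative
# placeholders, not specifications of any actuator listed on this wiki.

def geared_output(motor_torque_nm, motor_speed_rpm, gear_ratio, efficiency=0.9):
    """Estimate output torque and speed for a single-stage reduction."""
    output_torque = motor_torque_nm * gear_ratio * efficiency  # N*m
    output_speed = motor_speed_rpm / gear_ratio                # RPM
    return output_torque, output_speed

if __name__ == "__main__":
    # Hypothetical BLDC motor: 0.5 N*m continuous at 3000 RPM.
    for ratio in (6, 9, 15):
        torque, speed = geared_output(0.5, 3000, ratio)
        print(f"{ratio:>3}:1 reduction -> {torque:5.1f} N*m at {speed:6.1f} RPM")
</syntaxhighlight>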
==== Series Elastic and Quasi-Direct Drive Actuators ====
Series Elastic Actuators (SEAs) are used in applications requiring safe and compliant human-robot interaction. They incorporate elastic elements, allowing for energy absorption and safer interactions. Quasi-Direct Drive Actuators provide a balance between the control fidelity of direct drives and the mechanical simplicity of geared systems, promoting natural and responsive movements.
==== MIT Cheetah Actuator ====
The MIT Cheetah actuator design is a notable example that several community members are considering emulating. Its design optimizes for rapid, dynamic movements and could potentially set a standard for agile robotic locomotion.
=== Open-Source Development and Collaboration ===
==== SPIN: A Revolutionary Servo Project ====
The [https://github.com/atopile/spin-servo-drive SPIN project] by Atopile is an open-source hardware project aimed at making it easier and more cost-effective to use BLDC servo motors. It is particularly notable for its potential to democratize high-quality actuator technology, making it accessible to a broader range of developers and hobbyists.
=== Community Insights and Future Directions ===
==== Comprehensive Actuator Comparisons ====
The community actively discusses the need for a universal platform to compare and contrast the cost and performance of commercially available actuators. This could involve developing a comprehensive database or chart detailing each actuator's cost per Newton-meter, control schemes, and RPM, providing a valuable resource for both newcomers and experienced developers.
==== Custom Actuator Developments ====
There are discussions about custom actuator developments tailored for specific applications. For example, discussions of [https://irisdynamics.com/products/orca-series Iris Dynamics' electric linear actuators] suggest they can match the capabilities of human muscles, making them particularly interesting for humanoid applications.
== Assembly Tips ==
===== Community Forums =====
Leverage discussions from platforms like RobotForum to avoid common pitfalls. Whether it's selecting the right planetary gearbox or figuring out the optimal motor for each joint, community insights can be invaluable.
=== Programming and Control ===
==== ROS (Robot Operating System) ====
Start with ROS for an extensive suite of tools for programming and control, suitable for managing complex robotic functions.
==== Custom Software Solutions ====
Explore custom algorithms for adaptive control or reactive behaviors. Integrate advanced sensor feedback loops for real-time adjustments.
== Experimenting with Your Humanoid Robot ==
=== Testing and Iteration ===
==== Virtual before Physical ====
Use NVIDIA Isaac Sim to test your designs under various simulated conditions and refine your robot's mechanics and electronics.
==== Real-World Testing ====
Gradually transition to physical testing, beginning with simple tasks and moving to more complex interactions.
=== Data Collection and Analysis ===
==== Camera Systems ====
Consider integrating advanced camera systems like those from e-con Systems or Arducam for visual feedback and navigation. Discuss camera choices, considering factors like latency, resolution, and integration ease with your main control system.
== Advanced Customization and Community Engagement ==
=== Open Source Projects ===
Contribute to or start your own open-source project. For instance, GitHub hosts numerous projects where you can collaborate with others, such as [https://github.com/kscalelabs K-Scale].
=== Modular Design ===
Engage in modular robot design to easily swap components or aesthetics. This approach allows for extensive customization and upgrades over time.
== Safety and Continuous Learning ==
=== Safety Protocols ===
Always implement robust safety measures when testing and demonstrating your robot.
2467a3776ddffb1b0c4f1723c451a0eb62d9dc34
K-Scale Cluster
0
16
808
718
2024-05-03T16:55:18Z
Ben
2
wikitext
text/x-wiki
The K-Scale Labs clusters are shared clusters for robotics research. This page contains notes on how to access them.
=== Onboarding ===
To get onboarded, you should send us the public key that you want to use and maybe your preferred username.
After being onboarded, you should receive the following information:
* Your user ID (for this example, we'll use <code>stompy</code>)
* The jumphost ID (for this example, we'll use <code>127.0.0.1</code>)
* The cluster ID (for this example, we'll use <code>127.0.0.2</code>)
To connect, you should be able to use the following command:
<syntaxhighlight lang="bash">
ssh -o ProxyCommand="ssh -i ~/.ssh/id_rsa -W %h:%p stompy@127.0.0.1" stompy@127.0.0.2 -i ~/.ssh/id_rsa
</syntaxhighlight>
Note that <code>~/.ssh/id_rsa</code> should point to your private key file.
Alternatively, you can add the following to your SSH config file, which should allow you to connect directly. Use your favorite editor to open the SSH config file (normally located at <code>~/.ssh/config</code> on Ubuntu) and paste the following:
<syntaxhighlight lang="text">
Host jumphost
User stompy
Hostname 127.0.0.1
IdentityFile ~/.ssh/id_rsa
Host cluster
User stompy
Hostname 127.0.0.2
ProxyJump jumphost
IdentityFile ~/.ssh/id_rsa
</syntaxhighlight>
After setting this up, you can use the command <code>ssh cluster</code> to directly connect.
You can also connect via VS Code; a tutorial on using <code>ssh</code> in VS Code is available [https://code.visualstudio.com/docs/remote/ssh-tutorial here].
Please inform us if you have any issues!
=== Notes ===
* You may need to restart <code>ssh</code> to get it working.
* You may be sharing your part of the cluster with other users. If so, it is a good idea to avoid using all the GPUs. If you're training models in PyTorch, you can do this by setting the <code>CUDA_VISIBLE_DEVICES</code> environment variable.
* You should avoid storing data files and model checkpoints in your root directory. Instead, use the <code>/ephemeral</code> directory. Your home directory should come with a symlink to a subdirectory that you have write access to.
=== Cluster 1 ===
=== Cluster 2 ===
The cluster has 8 available nodes (each with 8 GPUs):
<syntaxhighlight lang="text">
compute-permanent-node-68
compute-permanent-node-285
compute-permanent-node-493
compute-permanent-node-625
compute-permanent-node-626
compute-permanent-node-749
compute-permanent-node-801
compute-permanent-node-580
</syntaxhighlight>
When you SSH in, you first land on the bastion node <code>pure-caribou-bastion</code>, from which you can log in to any other node to test your code.
== Reserving a GPU ==
Here is a shell function you can use to get an interactive GPU node through Slurm.
<syntaxhighlight lang="bash">
# Assumes SLURM_GPUNODE_PARTITION, SLURM_GPUNODE_NUM_GPUS, SLURM_GPUNODE_CPUS_PER_GPU
# and SLURM_GPUNODE_SHELL (e.g. /bin/bash) are exported in your shell profile.
gpunode () {
    # Re-attach to an already-running interactive job named "gpunode", if any.
    local job_id=$(squeue -u $USER -h -t R -o %i -n gpunode)
    if [[ -n $job_id ]]
    then
        echo "Attaching to job ID $job_id"
        srun --jobid=$job_id --partition=$SLURM_GPUNODE_PARTITION --gpus=$SLURM_GPUNODE_NUM_GPUS --cpus-per-gpu=$SLURM_GPUNODE_CPUS_PER_GPU --pty $SLURM_GPUNODE_SHELL
        return 0
    fi
    # Otherwise request a new interactive allocation.
    echo "Creating new job"
    srun --partition=$SLURM_GPUNODE_PARTITION --gpus=$SLURM_GPUNODE_NUM_GPUS --cpus-per-gpu=$SLURM_GPUNODE_CPUS_PER_GPU --interactive --job-name=gpunode --pty $SLURM_GPUNODE_SHELL
}
</syntaxhighlight>
[[Category:K-Scale]]
7ce1b1cb7f0cf12db1938ae0f41fdf5f6c32b895
809
808
2024-05-03T16:56:01Z
Ben
2
wikitext
text/x-wiki
The K-Scale Labs clusters are shared clusters for robotics research. This page contains notes on how to access them.
== Onboarding ==
To get onboarded, you should send us the public key that you want to use and maybe your preferred username.
After being onboarded, you should receive the following information:
* Your user ID (for this example, we'll use <code>stompy</code>)
* The jumphost ID (for this example, we'll use <code>127.0.0.1</code>)
* The cluster ID (for this example, we'll use <code>127.0.0.2</code>)
To connect, you should be able to use the following command:
<syntaxhighlight lang="bash">
ssh -o ProxyCommand="ssh -i ~/.ssh/id_rsa -W %h:%p stompy@127.0.0.1" stompy@127.0.0.2 -i ~/.ssh/id_rsa
</syntaxhighlight>
Note that <code>~/.ssh/id_rsa</code> should point to your private key file.
Alternatively, you can add the following to your SSH config file, which should allow you to connect directly. Use your favorite editor to open the SSH config file (normally located at <code>~/.ssh/config</code> on Ubuntu) and paste the following:
<syntaxhighlight lang="text">
Host jumphost
User stompy
Hostname 127.0.0.1
IdentityFile ~/.ssh/id_rsa
Host cluster
User stompy
Hostname 127.0.0.2
ProxyJump jumphost
IdentityFile ~/.ssh/id_rsa
</syntaxhighlight>
After setting this up, you can use the command <code>ssh cluster</code> to directly connect.
You can also connect via VS Code; a tutorial on using <code>ssh</code> in VS Code is available [https://code.visualstudio.com/docs/remote/ssh-tutorial here].
Please inform us if you have any issues!
=== Notes ===
* You may need to restart <code>ssh</code> to get it working.
* You may be sharing your part of the cluster with other users. If so, it is a good idea to avoid using all the GPUs. If you're training models in PyTorch, you can do this by setting the <code>CUDA_VISIBLE_DEVICES</code> environment variable (see the sketch after this list).
* You should avoid storing data files and model checkpoints in your root directory. Instead, use the <code>/ephemeral</code> directory. Your home directory should come with a symlink to a subdirectory that you have write access to.
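A minimal sketch of restricting a PyTorch process to a subset of GPUs by setting <code>CUDA_VISIBLE_DEVICES</code> before CUDA is initialized; the GPU indices below are placeholders, so pick ones that are actually free:
<syntaxhighlight lang="python">
import os

# Must be set before the first CUDA call (safest: before importing torch).
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"  # placeholder indices; use GPUs nobody else is using

import torch

# Only the two selected GPUs are visible, re-indexed as cuda:0 and cuda:1.
print(torch.cuda.device_count())
device = torch.device("cuda:0")
</syntaxhighlight>
Equivalently, you can export the variable in your shell before launching the training script.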
=== Cluster 1 ===
=== Cluster 2 ===
The cluster has 8 available nodes (each with 8 GPUs):
<syntaxhighlight lang="text">
compute-permanent-node-68
compute-permanent-node-285
compute-permanent-node-493
compute-permanent-node-625
compute-permanent-node-626
compute-permanent-node-749
compute-permanent-node-801
compute-permanent-node-580
</syntaxhighlight>
When you SSH in, you first land on the bastion node <code>pure-caribou-bastion</code>, from which you can log in to any other node to test your code.
== Reserving a GPU ==
Here is a shell function you can use to get an interactive GPU node through Slurm.
<syntaxhighlight lang="bash">
# Assumes SLURM_GPUNODE_PARTITION, SLURM_GPUNODE_NUM_GPUS, SLURM_GPUNODE_CPUS_PER_GPU
# and SLURM_GPUNODE_SHELL (e.g. /bin/bash) are exported in your shell profile.
gpunode () {
    # Re-attach to an already-running interactive job named "gpunode", if any.
    local job_id=$(squeue -u $USER -h -t R -o %i -n gpunode)
    if [[ -n $job_id ]]
    then
        echo "Attaching to job ID $job_id"
        srun --jobid=$job_id --partition=$SLURM_GPUNODE_PARTITION --gpus=$SLURM_GPUNODE_NUM_GPUS --cpus-per-gpu=$SLURM_GPUNODE_CPUS_PER_GPU --pty $SLURM_GPUNODE_SHELL
        return 0
    fi
    # Otherwise request a new interactive allocation.
    echo "Creating new job"
    srun --partition=$SLURM_GPUNODE_PARTITION --gpus=$SLURM_GPUNODE_NUM_GPUS --cpus-per-gpu=$SLURM_GPUNODE_CPUS_PER_GPU --interactive --job-name=gpunode --pty $SLURM_GPUNODE_SHELL
}
</syntaxhighlight>
[[Category:K-Scale]]
17872a9f6b0bb2914c7b58f28fedb491a1f1c5c9
Mirsee Robotics
0
194
811
2024-05-03T17:15:10Z
129.97.185.73
0
Created page with "[https://mirsee.com/ Mirsee Robotics] is a robotics company based in Cambridge, ON, Canada. Along with custom actuators, hands, and other solutions, they have developed two hu..."
wikitext
text/x-wiki
[https://mirsee.com/ Mirsee Robotics] is a robotics company based in Cambridge, ON, Canada. Along with custom actuators, hands, and other solutions, they have developed two humanoid robots, [[Beomni]] and [[Mirsee]], both of which have wheeled bases.
{{infobox company
| name = Mirsee Robotics
| country = Canada
| website_link = https://mirsee.com/
| robots = [[Beomni]], [[Mirsee]]
}}
[[Category:Companies]]
310d7ecb7bac430f83342c26498b2696e3d3c03c
Beomni
0
195
812
2024-05-03T17:20:54Z
129.97.185.73
0
Created page with "Mirsee is a humanoid robot from [[Mirsee Robotics]]. {{infobox robot | name = Mirsee | organization = [[Mirsee Robotics]] | height | weight | single_hand_payload | two_hand_p..."
wikitext
text/x-wiki
Mirsee is a humanoid robot from [[Mirsee Robotics]].
{{infobox robot
| name = Mirsee
| organization = [[Mirsee Robotics]]
| height
| weight
| single_hand_payload
| two_hand_payload = 60 lbs
| video_link = https://www.youtube.com/watch?v=mbHCuPBsIgg
| cost
}}
[[Category:Robots]]
f444cf6316711d299d16989f7c5145080092a322
814
812
2024-05-03T17:23:36Z
129.97.185.73
0
wikitext
text/x-wiki
Beomni is a humanoid robot developed by [[Mirsee Robotics]] for the Beyond Imagination AI company: https://www.beomni.ai/.
{{infobox robot
| name = Beomni
| organization = [[Mirsee Robotics]]
| height
| weight
| single_hand_payload
| two_hand_payload
| video_link = https://www.youtube.com/watch?v=chIFBTHyEaE
| cost
}}
[[Category:Robots]]
eeafd4009e739d39500ceabcdb10b715d40b9b80
Mirsee
0
196
813
2024-05-03T17:21:03Z
129.97.185.73
0
Created page with "Mirsee is a humanoid robot from [[Mirsee Robotics]]. {{infobox robot | name = Mirsee | organization = [[Mirsee Robotics]] | height | weight | single_hand_payload | two_hand_p..."
wikitext
text/x-wiki
Mirsee is a humanoid robot from [[Mirsee Robotics]].
{{infobox robot
| name = Mirsee
| organization = [[Mirsee Robotics]]
| height
| weight
| single_hand_payload
| two_hand_payload = 60 lbs
| video_link = https://www.youtube.com/watch?v=mbHCuPBsIgg
| cost
}}
[[Category:Robots]]
f444cf6316711d299d16989f7c5145080092a322
DEEP Robotics J60
0
197
816
2024-05-03T17:40:05Z
129.97.185.73
0
Created page with "The [https://www.deeprobotics.cn/en/index/j60.html J60] by [[DEEP Robotics]] is a family of high-performance actuators designed for use in quadruped and humanoid robots. == O..."
wikitext
text/x-wiki
The [https://www.deeprobotics.cn/en/index/j60.html J60] by [[DEEP Robotics]] is a family of high-performance actuators designed for use in quadruped and humanoid robots.
== Overview ==
The J60 actuators are high-performance actuators that have been specifically designed for use in humanoid robots and in [[DEEP Robotics]] quadrupeds. These actuators are reliable and durable, featuring a high torque-to-weight ratio. Each joint integrates a gear reduction, a frameless torque motor, a servo driver, and an absolute encoder into one compact unit.
=== Actuators ===
{{infobox actuator
| name = J60-6
| manufacturer = DEEP Robotics
| link = https://www.deeprobotics.cn/en/index/j60.html
| peak_torque = 19.94 Nm
| peak_speed = 24.18 rad/s
| dimensions = 76.5mm diameter, 63mm length
| weight = 480g
| absolute_encoder_resolution = 14 bit
| operating_voltage_range = 12-36V
| standard_operating_voltage = 24V
| interface = CAN bus / RS485
| control_frequency = 1kHz
}}
{{infobox actuator
| name = J60-10
| manufacturer = DEEP Robotics
| link = https://www.deeprobotics.cn/en/index/j60.html
| peak_torque = 30.50 Nm
| peak_speed = 15.49 rad/s
| dimensions = 76.5mm diameter, 72.5mm length
| weight = 540g
| absolute_encoder_resolution = 14 bit
| operating_voltage_range = 12-36V
| standard_operating_voltage = 24V
| interface = CAN bus / RS485
| control_frequency = 1kHz
}}
[[Category: Actuators]]
b5cdee73845d295957f9422b4ae2f30130be176c
DEEP Robotics
0
198
818
2024-05-03T17:45:18Z
129.97.185.73
0
Created page with "[https://www.deeprobotics.cn/en/ DEEP Robotics] is a robotics company based in Hangzhou, Zhejiang Province, China. They make research-grade and industrial-level quadrupedal ro..."
wikitext
text/x-wiki
[https://www.deeprobotics.cn/en/ DEEP Robotics] is a robotics company based in Hangzhou, Zhejiang Province, China. They make research-grade and industrial-level quadrupedal robots, such as the [https://www.deeprobotics.cn/en/index/product1.html Lite 3] and the [https://www.deeprobotics.cn/en/index/product3.html X30], as well as high-performance actuators for humanoid and quadruped robots (the [[J60]] series).
{{infobox company
| name = DEEP Robotics
| country = China
| website_link = https://www.deeprobotics.cn/en/
}}
[[Category:Companies]]
28ba58276da9ea11e9bd04fc9f44af5e22160df4
Getting Started with Humanoid Robots
0
193
832
831
2024-05-03T21:35:06Z
Vrtnis
21
wikitext
text/x-wiki
This is a build guide for getting started experimenting with your own humanoid robot.
This is incomplete; you can help by expanding it!
'''update:''' ''work in progress - starting with a template, plan to expand on sections :)''
This guide is crafted for enthusiasts who are not just looking to study humanoid robotics but to actually build and experiment with their own robots.
== Building Your Humanoid Robot ==
In humanoid robotics, choosing the right components, such as actuators and gearboxes, is crucial. Planetary and cycloidal gear actuators offer precision and strength, while Series Elastic and Quasi-Direct Drive actuators enable smoother, more natural movements. Advanced designs like the MIT Cheetah actuator push the boundaries with fast, agile movements. Projects like the SPIN initiative are also key, as they make high-quality actuator technology more accessible, helping the field evolve and improve.
== Actuators and Gearboxes ==
=== Actuator Types and Design Inspirations ===
==== Planetary and Cycloidal Gear Actuators ====
These actuators remain popular in the robotics community due to their high torque output and compact form factors. Planetary gears are favored for their efficiency and ability to handle high power densities, crucial for humanoid robotics. Cycloidal gears offer superior load-bearing capabilities and minimal backlash, ideal for precise motion control.
==== Series Elastic and Quasi-Direct Drive Actuators ====
Series Elastic Actuators (SEAs) are used in applications requiring safe and compliant human-robot interaction. They incorporate elastic elements, allowing for energy absorption and safer interactions. Quasi-Direct Drive Actuators provide a balance between the control fidelity of direct drives and the mechanical simplicity of geared systems, promoting natural and responsive movements.
==== MIT Cheetah Actuator ====
The MIT Cheetah actuator design is a notable example that several community members are considering emulating. Its design optimizes for rapid, dynamic movements and could potentially set a standard for agile robotic locomotion.
=== Open-Source Development and Collaboration ===
==== SPIN: A Revolutionary Servo Project ====
The [https://github.com/atopile/spin-servo-drive SPIN project] by Atopile is an open-source hardware project aimed at making it easier and more cost-effective to use BLDC servo motors. It is particularly notable for its potential to democratize high-quality actuator technology, making it accessible to a broader range of developers and hobbyists.
=== Community Insights and Future Directions ===
==== Comprehensive Actuator Comparisons ====
The community actively discusses the need for a universal platform to compare and contrast the cost and performance of commercially available actuators. This could involve developing a comprehensive database or chart detailing each actuator's cost per Newton-meter, control schemes, and RPM, providing a valuable resource for both newcomers and experienced developers.
==== Custom Actuator Developments ====
There are discussions about custom actuator developments tailored for specific applications. For example, discussions of [https://irisdynamics.com/products/orca-series Iris Dynamics' electric linear actuators] suggest they can match the capabilities of human muscles, making them particularly interesting for humanoid applications.
== Assembly Tips ==
===== Community Forums =====
Leverage discussions from platforms like RobotForum to avoid common pitfalls. Whether it's selecting the right planetary gearbox or figuring out the optimal motor for each joint, community insights can be invaluable.
=== Programming and Control ===
==== ROS (Robot Operating System) ====
Start with ROS for an extensive suite of tools for programming and control, suitable for managing complex robotic functions.
==== Custom Software Solutions ====
Explore custom algorithms for adaptive control or reactive behaviors. Integrate advanced sensor feedback loops for real-time adjustments.
== Experimenting with Your Humanoid Robot ==
=== Testing and Iteration ===
==== Virtual before Physical ====
Use ISAAC Sim to test your designs under various simulated conditions to refine your robot's mechanics and electronics.
==== Real-World Testing ====
Gradually transition to physical testing, beginning with simple tasks and moving to more complex interactions.
=== Data Collection and Analysis ===
==== Camera Systems ====
Consider integrating advanced camera systems like those from e-con Systems or Arducam for visual feedback and navigation. Discuss camera choices, considering factors like latency, resolution, and integration ease with your main control system.
== Advanced Customization and Community Engagement ==
=== Open Source Projects ===
Contribute to or start your own open-source project. For instance, GitHub hosts numerous projects where you can collaborate with others, such as [https://github.com/kscalelabs K-Scale].
=== Modular Design ===
Engage in modular robot design to easily swap components or aesthetics. This approach allows for extensive customization and upgrades over time.
== Safety and Continuous Learning ==
=== Safety Protocols ===
Always implement robust safety measures when testing and demonstrating your robot.
353d7921ff47c5ccd6125524a4bfa2ae44d78ef0
843
832
2024-05-04T20:00:14Z
Vrtnis
21
wikitext
text/x-wiki
This is a build guide for getting started experimenting with your own humanoid robot.
This is incomplete; you can help by expanding it!
'''update:''' ''work in progress - starting with a template, plan to expand on sections :)''
This guide is crafted for enthusiasts who are not just looking to study humanoid robotics but to actually build and experiment with their own robots.
== Building Your Humanoid Robot ==
In humanoid robotics, choosing the right components, such as actuators and gearboxes, is crucial. Planetary and cycloidal gear actuators offer precision and strength, while Series Elastic and Quasi-Direct Drive actuators enable smoother, more natural movements. Advanced designs like the MIT Cheetah actuator push the boundaries with fast, agile movements. Projects like the SPIN initiative are also key, as they make high-quality actuator technology more accessible, helping the field evolve and improve.
== Actuators and Gearboxes ==
=== Actuator Types and Design Inspirations ===
==== Planetary and Cycloidal Gear Actuators ====
These actuators remain popular in the robotics community due to their high torque output and compact form factors. Planetary gears are favored for their efficiency and ability to handle high power densities, crucial for humanoid robotics. Cycloidal gears offer superior load-bearing capabilities and minimal backlash, ideal for precise motion control.
==== Series Elastic and Quasi-Direct Drive Actuators ====
Series Elastic Actuators (SEAs) are used in applications requiring safe and compliant human-robot interaction. They incorporate elastic elements, allowing for energy absorption and safer interactions. Quasi-Direct Drive Actuators provide a balance between the control fidelity of direct drives and the mechanical simplicity of geared systems, promoting natural and responsive movements.
==== MIT Cheetah Actuator ====
The MIT Cheetah actuator design is a notable example that several community members are considering emulating. Its design optimizes for rapid, dynamic movements and could potentially set a standard for agile robotic locomotion.
=== Open-Source Development and Collaboration ===
==== SPIN: A Revolutionary Servo Project ====
The [https://github.com/atopile/spin-servo-drive SPIN project] by Atopile is an open-source hardware project aimed at making it easier and more cost-effective to use BLDC servo motors. It is particularly notable for its potential to democratize high-quality actuator technology, making it accessible to a broader range of developers and hobbyists.
=== Community Insights and Future Directions ===
==== Comprehensive Actuator Comparisons ====
The community actively discusses the need for a universal platform to compare and contrast the cost and performance of commercially available actuators. This could involve developing a comprehensive database or chart detailing each actuator's cost per Newton-meter, control schemes, and RPM, providing a valuable resource for both newcomers and experienced developers.
==== Custom Actuator Developments ====
There are discussions about custom actuator developments tailored for specific applications. For example, discussions of [https://irisdynamics.com/products/orca-series Iris Dynamics' electric linear actuators] suggest they can match the capabilities of human muscles, making them particularly interesting for humanoid applications.
== Assembly Tips ==
===== Community Forums =====
Leverage discussions from platforms like RobotForum to avoid common pitfalls. Whether it's selecting the right planetary gearbox or figuring out the optimal motor for each joint, community insights can be invaluable.
=== Programming and Control ===
==== ROS (Robot Operating System) ====
Start with ROS for an extensive suite of tools for programming and control, suitable for managing complex robotic functions.
==== Custom Software Solutions ====
Explore custom algorithms for adaptive control or reactive behaviors. Integrate advanced sensor feedback loops for real-time adjustments.
== Experimenting with Your Humanoid Robot ==
=== Testing and Iteration ===
==== Virtual Testing Before Physical Implementation in Humanoid Robotics ====
NVIDIA's Isaac Sim and Isaac Gym, alongside other simulators, form a crucial foundation for designing and testing humanoid robots virtually. Insights and suggestions from experts working with these tools are captured below.
===== Isaac-Based Simulators and Frameworks =====
====== Isaac Sim ======
IDE Experience: Provides a comprehensive, if complex, simulation environment.
PhysX Engine: Utilizes the PhysX engine to handle both contact and joint constraints, though Isaac Sim currently does not fully expose closed-loop constraint capabilities.
Joint Constraints: Supports maximal coordinate systems, which include joint constraints that are common in articulated robots.
Virtual Sensors: Allows the simulation of perception with virtual cameras and LiDARs, providing policy training inputs rendered with NVIDIA RTX.
====== Isaac Gym ======
Reinforcement Learning Training: Enables parallel environments for fast policy training.
PHC Approach: Integrates AMP for real-time pose control, making it easier to teach new skills.
Gait Optimization Issues: While 17-DOF walking tasks work well, gait reward optimization needs refinement for more complex tasks.
Closed-Loop Articulation: Belt-driven mechanisms provide a viable alternative for certain closed-loop designs.
====== Orbit Framework ======
Unified Training Framework: Integrates Isaac Sim and Isaac Gym for modular and consistent policy validation.
OmniIsaacGymEnvs: Offers predefined tasks like walking and standing.
====== Omniverse Isaac Gym ======
Shift in Development: NVIDIA is consolidating Isaac Gym into Isaac Sim through Omniverse, providing the best of both worlds.
Challenges: Demands powerful NVIDIA GPUs, potentially limiting some development workflows.
===== External Tools and Comparative Platforms =====
====== Legged Gym ======
A repository showcasing the state-of-the-art in legged robot training.
* '''MuJoCo (MJX)''': Offers a lightweight open-source alternative, supporting maximal coordinate constraints and easier to work with.
* '''VSim''': Claims to be 10x faster than other simulators.
* '''ManiSkill/Sapien''': Provides tactile simulation and visual-based policy training that is up to 100x faster than Isaac Sim.
== Best Practices for Virtual Testing ==
* '''Incremental Complexity''': Start simple and build up to more complex environments and tasks.
* '''Cross-Simulator Validation''': Validate robot models across simulators (e.g., Isaac and MuJoCo) to ensure robustness.
* '''Incorporate Real-World Fidelity''': Include sensor noise and imperfections for better policy generalization.
* '''Optimize Resources''':
** Use Azure's A100 GPUs for Isaac training.
** Capture real-world data to refine virtual training.
By understanding the nuances and strengths of each simulator, developers can refine their humanoid robots effectively. Using Isaac Sim, Isaac Gym, and complementary tools, a robust simulation approach ensures smooth virtual-to-physical transferability while reducing development time and costs.
==== Real-World Testing ====
Gradually transition to physical testing, beginning with simple tasks and moving to more complex interactions.
=== Data Collection and Analysis ===
==== Camera Systems ====
Consider integrating advanced camera systems like those from e-con Systems or Arducam for visual feedback and navigation. Discuss camera choices, considering factors like latency, resolution, and integration ease with your main control system.
== Advanced Customization and Community Engagement ==
=== Open Source Projects ===
Contribute to or start your own open-source project. For instance, GitHub hosts numerous projects where you can collaborate with others, such as [https://github.com/kscalelabs K-Scale].
=== Modular Design ===
Engage in modular robot design to easily swap components or aesthetics. This approach allows for extensive customization and upgrades over time.
== Safety and Continuous Learning ==
=== Safety Protocols ===
Always implement robust safety measures when testing and demonstrating your robot.
c9cda6bafed17dbebdb2d645d276f30f8cb0067b
844
843
2024-05-04T20:09:14Z
Vrtnis
21
wikitext
text/x-wiki
This is a build guide for getting started experimenting with your own humanoid robot.
This is incomplete; you can help by expanding it!
'''update:''' ''work in progress - starting with a template, plan to expand on sections :)''
This guide is crafted for enthusiasts who are not just looking to study humanoid robotics but to actually build and experiment with their own robots.
== Building Your Humanoid Robot ==
In humanoid robotics, choosing the right components, such as actuators and gearboxes, is crucial. Planetary and cycloidal gear actuators offer precision and strength, while Series Elastic and Quasi-Direct Drive actuators enable smoother, more natural movements. Advanced designs like the MIT Cheetah actuator push the boundaries with fast, agile movements. Projects like the SPIN initiative are also key, as they make high-quality actuator technology more accessible, helping the field evolve and improve.
== Actuators and Gearboxes ==
=== Actuator Types and Design Inspirations ===
==== Planetary and Cycloidal Gear Actuators ====
These actuators remain popular in the robotics community due to their high torque output and compact form factors. Planetary gears are favored for their efficiency and ability to handle high power densities, crucial for humanoid robotics. Cycloidal gears offer superior load-bearing capabilities and minimal backlash, ideal for precise motion control.
==== Series Elastic and Quasi-Direct Drive Actuators ====
Series Elastic Actuators (SEAs) are used in applications requiring safe and compliant human-robot interaction. They incorporate elastic elements, allowing for energy absorption and safer interactions. Quasi-Direct Drive Actuators provide a balance between the control fidelity of direct drives and the mechanical simplicity of geared systems, promoting natural and responsive movements.
==== MIT Cheetah Actuator ====
The MIT Cheetah actuator design is a notable example that several community members are considering emulating. Its design optimizes for rapid, dynamic movements and could potentially set a standard for agile robotic locomotion.
=== Open-Source Development and Collaboration ===
==== SPIN: A Revolutionary Servo Project ====
The [https://github.com/atopile/spin-servo-drive SPIN project] by Atopile is an open-source hardware project aimed at making it easier and more cost-effective to use BLDC servo motors. It is particularly notable for its potential to democratize high-quality actuator technology, making it accessible to a broader range of developers and hobbyists.
=== Community Insights and Future Directions ===
==== Comprehensive Actuator Comparisons ====
The community actively discusses the need for a universal platform to compare and contrast the cost and performance of commercially available actuators. This could involve developing a comprehensive database or chart detailing each actuator's cost per Newton-meter, control schemes, and RPM, providing a valuable resource for both newcomers and experienced developers.
==== Custom Actuator Developments ====
There are discussions about custom actuator developments tailored for specific applications. For example, discussions of [https://irisdynamics.com/products/orca-series Iris Dynamics' electric linear actuators] suggest they can match the capabilities of human muscles, making them particularly interesting for humanoid applications.
== Assembly Tips ==
===== Community Forums =====
Leverage discussions from platforms like RobotForum to avoid common pitfalls. Whether it's selecting the right planetary gearbox or figuring out the optimal motor for each joint, community insights can be invaluable.
=== Programming and Control ===
==== ROS (Robot Operating System) ====
Start with ROS for an extensive suite of tools for programming and control, suitable for managing complex robotic functions.
==== Custom Software Solutions ====
Explore custom algorithms for adaptive control or reactive behaviors. Integrate advanced sensor feedback loops for real-time adjustments.
== Experimenting with Your Humanoid Robot ==
=== Testing and Iteration ===
==== Virtual Testing Before Physical Implementation in Humanoid Robotics ====
NVIDIA's Isaac Sim and Isaac Gym, alongside other simulators, form a crucial foundation for designing and testing humanoid robots virtually. Insights and suggestions from experts working with these tools are captured below.
===== Isaac-Based Simulators and Frameworks =====
====== Isaac Sim ======
IDE Experience: Provides a comprehensive, if complex, simulation environment.
PhysX Engine: Utilizes the PhysX engine to handle both contact and joint constraints, though Isaac Sim currently does not fully expose closed-loop constraint capabilities.
Joint Constraints: Supports maximal coordinate systems, which include joint constraints that are common in articulated robots.
Virtual Sensors: Allows the simulation of perception with virtual cameras and LiDARs, providing policy training inputs rendered with NVIDIA RTX.
====== Isaac Gym ======
Reinforcement Learning Training: Enables parallel environments for fast policy training.
PHC Approach: Integrates AMP for real-time pose control, making it easier to teach new skills.
Gait Optimization Issues: While 17-DOF walking tasks work well, gait reward optimization needs refinement for more complex tasks.
Closed-Loop Articulation: Belt-driven mechanisms provide a viable alternative for certain closed-loop designs.
====== Orbit Framework ======
Unified Training Framework: Integrates Isaac Sim and Isaac Gym for modular and consistent policy validation.
OmniIsaacGymEnvs: Offers predefined tasks like walking and standing.
====== Omniverse Isaac Gym ======
Shift in Development: NVIDIA is consolidating Isaac Gym into Isaac Sim through Omniverse, providing the best of both worlds.
Challenges: Demands powerful NVIDIA GPUs, potentially limiting some development workflows.
===== External Tools and Comparative Platforms =====
====== Legged Gym ======
A repository showcasing the state-of-the-art in legged robot training.
====== MuJoCo (MJX)======
Offers a lightweight open-source alternative, supporting maximal coordinate constraints and easier to work with.
====== VSim ======
Claims to be 10x faster than other simulators.
====== ManiSkill/Sapien ======
Provides tactile simulation and visual-based policy training that is up to 100x faster than Isaac Sim.
== Best Practices for Virtual Testing ==
* '''Incremental Complexity''': Start simple and build up to more complex environments and tasks.
* '''Cross-Simulator Validation''': Validate robot models across simulators (e.g., Isaac and MuJoCo) to ensure robustness.
* '''Incorporate Real-World Fidelity''': Include sensor noise and imperfections for better policy generalization.
* '''Optimize Resources''':
** Use Azure's A100 GPUs for Isaac training.
** Capture real-world data to refine virtual training.
By understanding the nuances and strengths of each simulator, developers can refine their humanoid robots effectively. Using Isaac Sim, Isaac Gym, and complementary tools, a robust simulation approach ensures smooth virtual-to-physical transferability while reducing development time and costs.
==== Real-World Testing ====
Gradually transition to physical testing, beginning with simple tasks and moving to more complex interactions.
=== Data Collection and Analysis ===
==== Camera Systems ====
Consider integrating advanced camera systems like those from e-con Systems or Arducam for visual feedback and navigation. Discuss camera choices, considering factors like latency, resolution, and integration ease with your main control system.
== Advanced Customization and Community Engagement ==
=== Open Source Projects ===
Contribute to or start your own open-source project. For instance, GitHub hosts numerous projects where you can collaborate with others, such as [https://github.com/kscalelabs K-Scale].
=== Modular Design ===
Engage in modular robot design to easily swap components or aesthetics. This approach allows for extensive customization and upgrades over time.
== Safety and Continuous Learning ==
=== Safety Protocols ===
Always implement robust safety measures when testing and demonstrating your robot.
4c1720c62fabb5ec7632e59e6f72da89ab422efa
845
844
2024-05-04T20:12:48Z
Vrtnis
21
wikitext
text/x-wiki
This is a build guide for getting started experimenting with your own humanoid robot.
This is incomplete; you can help by expanding it!
'''update:''' ''work in progress - starting with a template, plan to expand on sections :)''
This guide is crafted for enthusiasts who are not just looking to study humanoid robotics but to actually build and experiment with their own robots.
== Building Your Humanoid Robot ==
In humanoid robotics, choosing the right components, such as actuators and gearboxes, is crucial. Planetary and cycloidal gear actuators offer precision and strength, while Series Elastic and Quasi-Direct Drive actuators enable smoother, more natural movements. Advanced designs like the MIT Cheetah actuator push the boundaries with fast, agile movements. Projects like the SPIN initiative are also key, as they make high-quality actuator technology more accessible, helping the field evolve and improve.
== Actuators and Gearboxes ==
=== Actuator Types and Design Inspirations ===
==== Planetary and Cycloidal Gear Actuators ====
These actuators remain popular in the robotics community due to their high torque output and compact form factors. Planetary gears are favored for their efficiency and ability to handle high power densities, crucial for humanoid robotics. Cycloidal gears offer superior load-bearing capabilities and minimal backlash, ideal for precise motion control.
==== Series Elastic and Quasi-Direct Drive Actuators ====
Series Elastic Actuators (SEAs) are used in applications requiring safe and compliant human-robot interaction. They incorporate elastic elements, allowing for energy absorption and safer interactions. Quasi-Direct Drive Actuators provide a balance between the control fidelity of direct drives and the mechanical simplicity of geared systems, promoting natural and responsive movements.
==== MIT Cheetah Actuator ====
The MIT Cheetah actuator design is a notable example that several community members are considering emulating. Its design optimizes for rapid, dynamic movements and could potentially set a standard for agile robotic locomotion.
=== Open-Source Development and Collaboration ===
==== SPIN: A Revolutionary Servo Project ====
The [https://github.com/atopile/spin-servo-drive SPIN project] by Atopile is an open-source hardware project aimed at making it easier and more cost-effective to use BLDC servo motors. It is particularly notable for its potential to democratize high-quality actuator technology, making it accessible to a broader range of developers and hobbyists.
=== Community Insights and Future Directions ===
==== Comprehensive Actuator Comparisons ====
The community actively discusses the need for a universal platform to compare and contrast the cost and performance of commercially available actuators. This could involve developing a comprehensive database or chart detailing each actuator's cost per Newton-meter, control schemes, and RPM, providing a valuable resource for both newcomers and experienced developers.
==== Custom Actuator Developments ====
There are discussions about custom actuator developments tailored for specific applications. For example, discussions of [https://irisdynamics.com/products/orca-series Iris Dynamics' electric linear actuators] suggest they can match the capabilities of human muscles, making them particularly interesting for humanoid applications.
== Assembly Tips ==
===== Community Forums =====
Leverage discussions from platforms like RobotForum to avoid common pitfalls. Whether it's selecting the right planetary gearbox or figuring out the optimal motor for each joint, community insights can be invaluable.
=== Programming and Control ===
==== ROS (Robot Operating System) ====
Start with ROS for an extensive suite of tools for programming and control, suitable for managing complex robotic functions.
==== Custom Software Solutions ====
Explore custom algorithms for adaptive control or reactive behaviors. Integrate advanced sensor feedback loops for real-time adjustments.
== Experimenting with Your Humanoid Robot ==
=== Testing and Iteration ===
==== Virtual Testing Before Physical Implementation in Humanoid Robotics ====
NVIDIA's Isaac Sim and Isaac Gym, alongside other simulators, form a crucial foundation for designing and testing humanoid robots virtually. Insights and suggestions from experts working with these tools are captured below.
===== Isaac-Based Simulators and Frameworks =====
====== Isaac Sim ======
IDE Experience: Provides a comprehensive, if complex, simulation environment.
PhysX Engine: Utilizes the PhysX engine to handle both contact and joint constraints, though Isaac Sim currently does not fully expose closed-loop constraint capabilities.
Joint Constraints: Supports maximal coordinate systems, which include joint constraints that are common in articulated robots.
Virtual Sensors: Allows the simulation of perception with virtual cameras and LiDARs, providing policy training inputs rendered with NVIDIA RTX.
====== Isaac Gym ======
Reinforcement Learning Training: Enables parallel environments for fast policy training.
PHC Approach: Integrates AMP for real-time pose control, making it easier to teach new skills.
Gait Optimization Issues: While 17-DOF walking tasks work well, gait reward optimization needs refinement for more complex tasks.
Closed-Loop Articulation: Belt-driven mechanisms provide a viable alternative for certain closed-loop designs.
====== Orbit Framework ======
Unified Training Framework: Integrates Isaac Sim and Isaac Gym for modular and consistent policy validation.
OmniIsaacGymEnvs: Offers predefined tasks like walking and standing.
====== Omniverse Isaac Gym ======
Shift in Development: NVIDIA is consolidating Isaac Gym into Isaac Sim through Omniverse, providing the best of both worlds.
Challenges: Demands powerful NVIDIA GPUs, potentially limiting some development workflows.
===== External Tools and Comparative Platforms =====
====== Legged Gym ======
A repository showcasing the state-of-the-art in legged robot training.
====== MuJoCo (MJX)======
Offers a lightweight open-source alternative, supporting maximal coordinate constraints and easier to work with.
====== VSim ======
Claims to be 10x faster than other simulators.
====== ManiSkill/Sapien ======
Provides tactile simulation and visual-based policy training that is up to 100x faster than Isaac Sim.
===== Best Practices for Virtual Testing =====
* '''Incremental Complexity''': Start simple and build up to more complex environments and tasks.
* '''Cross-Simulator Validation''': Validate robot models across simulators (e.g., Isaac and MuJoCo) to ensure robustness.
* '''Incorporate Real-World Fidelity''': Include sensor noise and imperfections for better policy generalization.
* '''Optimize Resources''':
** Use Azure's A100 GPUs for Isaac training.
** Capture real-world data to refine virtual training.
By understanding the nuances and strengths of each simulator, developers can refine their humanoid robots effectively. Using Isaac Sim, Isaac Gym, and complementary tools, a robust simulation approach ensures smooth virtual-to-physical transferability while reducing development time and costs.
== Real-World Testing ==
Gradually transition to physical testing, beginning with simple tasks and moving to more complex interactions.
=== Data Collection and Analysis ===
==== Camera Systems ====
Consider integrating advanced camera systems like those from e-con Systems or Arducam for visual feedback and navigation. Discuss camera choices, considering factors like latency, resolution, and integration ease with your main control system.
== Advanced Customization and Community Engagement ==
=== Open Source Projects ===
Contribute to or start your own open-source project. For instance, GitHub hosts numerous projects where you can collaborate with others, such as [https://github.com/kscalelabs K-Scale].
=== Modular Design ===
Engage in modular robot design to easily swap components or aesthetics. This approach allows for extensive customization and upgrades over time.
== Safety and Continuous Learning ==
=== Safety Protocols ===
Always implement robust safety measures when testing and demonstrating your robot.
16cacb5853202dbd117a26819f958e48729f7a5b
846
845
2024-05-04T20:13:18Z
Vrtnis
21
/* VSim */
wikitext
text/x-wiki
This is a build guide for getting started experimenting with your own humanoid robot.
This is incomplete; you can help by expanding it!
'''update:''' ''work in progress - starting with a template, plan to expand on sections :)''
This guide is crafted for enthusiasts who are not just looking to study humanoid robotics but to actually build and experiment with their own robots.
== Building Your Humanoid Robot ==
In humanoid robotics, choosing the right components, such as actuators and gearboxes, is crucial. Planetary and cycloidal gear actuators offer precision and strength, while Series Elastic and Quasi-Direct Drive actuators enable smoother, more natural movements. Advanced designs like the MIT Cheetah actuator push the boundaries with fast, agile movements. Projects like the SPIN initiative are also key, as they make high-quality actuator technology more accessible, helping the field evolve and improve.
== Actuators and Gearboxes ==
=== Actuator Types and Design Inspirations ===
==== Planetary and Cycloidal Gear Actuators ====
These actuators remain popular in the robotics community due to their high torque output and compact form factors. Planetary gears are favored for their efficiency and ability to handle high power densities, crucial for humanoid robotics. Cycloidal gears offer superior load-bearing capabilities and minimal backlash, ideal for precise motion control.
==== Series Elastic and Quasi-Direct Drive Actuators ====
Series Elastic Actuators (SEAs) are used in applications requiring safe and compliant human-robot interaction. They incorporate elastic elements, allowing for energy absorption and safer interactions. Quasi-Direct Drive Actuators provide a balance between the control fidelity of direct drives and the mechanical simplicity of geared systems, promoting natural and responsive movements.
==== MIT Cheetah Actuator ====
The MIT Cheetah actuator design is a notable example that several community members are considering emulating. Its design optimizes for rapid, dynamic movements and could potentially set a standard for agile robotic locomotion.
=== Open-Source Development and Collaboration ===
==== SPIN: A Revolutionary Servo Project ====
The [https://github.com/atopile/spin-servo-drive SPIN project] by Atopile is an open-source hardware project aimed at making it easier and more cost-effective to use BLDC servo motors. It is particularly notable for its potential to democratize high-quality actuator technology, making it accessible to a broader range of developers and hobbyists.
=== Community Insights and Future Directions ===
==== Comprehensive Actuator Comparisons ====
The community actively discusses the need for a universal platform to compare and contrast the cost and performance of commercially available actuators. This could involve developing a comprehensive database or chart detailing each actuator's cost per Newton-meter, control schemes, and RPM, providing a valuable resource for both newcomers and experienced developers.
==== Custom Actuator Developments ====
There are discussions about custom actuator developments tailored for specific applications. For example, discussions of [https://irisdynamics.com/products/orca-series Iris Dynamics' electric linear actuators] suggest they can match the capabilities of human muscles, making them particularly interesting for humanoid applications.
== Assembly Tips ==
=== Community Forums ===
Leverage discussions from platforms like RobotForum to avoid common pitfalls. Whether it's selecting the right planetary gearbox or figuring out the optimal motor for each joint, community insights can be invaluable.
=== Programming and Control ===
==== ROS (Robot Operating System) ====
Start with ROS for an extensive suite of tools for programming and control, suitable for managing complex robotic functions.
==== Custom Software Solutions ====
Explore custom algorithms for adaptive control or reactive behaviors. Integrate advanced sensor feedback loops for real-time adjustments.
== Experimenting with Your Humanoid Robot ==
=== Testing and Iteration ===
==== Virtual Testing Before Physical Implementation in Humanoid Robotics ====
NVIDIA's Isaac Sim and Isaac Gym, alongside other simulators, form a crucial foundation for designing and testing humanoid robots virtually. Insights and suggestions from experts working with these tools are captured below.
===== Isaac-Based Simulators and Frameworks =====
====== Isaac Sim ======
IDE Experience: Provides a comprehensive, if complex, simulation environment.
PhysX Engine: Utilizes the PhysX engine to handle both contact and joint constraints, though Isaac Sim currently does not fully expose closed-loop constraint capabilities.
Joint Constraints: Supports maximal coordinate systems, which include joint constraints that are common in articulated robots.
Virtual Sensors: Allows the simulation of perception with virtual cameras and LiDARs, providing policy training inputs rendered with NVIDIA RTX.
====== Isaac Gym ======
Reinforcement Learning Training: Enables parallel environments for fast policy training.
PHC Approach: Integrates AMP for real-time pose control, making it easier to teach new skills.
Gait Optimization Issues: While 17-DOF walking tasks work well, gait reward optimization needs refinement for more complex tasks.
Closed-Loop Articulation: Belt-driven mechanisms provide a viable alternative for certain closed-loop designs.
====== Orbit Framework ======
Unified Training Framework: Integrates Isaac Sim and Isaac Gym for modular and consistent policy validation.
OmniIsaacGymEnvs: Offers predefined tasks like walking and standing.
====== Omniverse Isaac Gym ======
Shift in Development: NVIDIA is consolidating Isaac Gym into Isaac Sim through Omniverse, providing the best of both worlds.
Challenges: Demands powerful NVIDIA GPUs, potentially limiting some development workflows.
===== External Tools and Comparative Platforms =====
====== Legged Gym ======
A repository showcasing the state-of-the-art in legged robot training.
====== MuJoCo (MJX)======
Offers a lightweight open-source alternative, supporting maximal coordinate constraints and easier to work with.
====== VSim ======
Claims to be 10x faster than other simulators.
====== ManiSkill/Sapien ======
Provides tactile simulation and visual-based policy training that is up to 100x faster than Isaac Sim.
===== Best Practices for Virtual Testing =====
* '''Incremental Complexity''': Start simple and build up to more complex environments and tasks.
* '''Cross-Simulator Validation''': Validate robot models across simulators (e.g., Isaac and MuJoCo) to ensure robustness.
* '''Incorporate Real-World Fidelity''': Include sensor noise and imperfections for better policy generalization (a minimal noise-injection sketch follows this list).
* '''Optimize Resources''':
** Use Azure's A100 GPUs for Isaac training.
** Capture real-world data to refine virtual training.
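As a minimal sketch of the real-world-fidelity point above, observation noise can be injected before observations reach the policy; the <code>add_sensor_noise</code> helper and the noise levels below are illustrative assumptions, not part of any particular simulator's API:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(seed=0)

def add_sensor_noise(reading, std=0.01, bias=0.0, dropout_prob=0.0):
    """Add Gaussian noise, a constant bias, and occasional dropouts to a simulated reading."""
    if rng.random() < dropout_prob:
        return None  # simulate a dropped sample
    return reading + bias + rng.normal(0.0, std, size=np.shape(reading))

# Example: noisy joint-angle observations (radians) fed to the policy instead of clean ones.
clean_obs = np.array([0.10, -0.25, 1.57])
noisy_obs = add_sensor_noise(clean_obs, std=0.005, bias=0.002, dropout_prob=0.01)
</syntaxhighlight>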
By understanding the nuances and strengths of each simulator, developers can refine their humanoid robots effectively. Using Isaac Sim, Isaac Gym, and complementary tools, a robust simulation approach ensures smooth virtual-to-physical transferability while reducing development time and costs.
== Real-World Testing ==
Gradually transition to physical testing, beginning with simple tasks and moving to more complex interactions.
=== Data Collection and Analysis ===
==== Camera Systems ====
Consider integrating advanced camera systems like those from e-con Systems or Arducam for visual feedback and navigation. Weigh camera choices against factors like latency, resolution, and ease of integration with your main control system.
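As a rough starting point for comparing cameras, the sketch below times frame grabs through OpenCV; the device index and resolution are assumptions you should adjust for your hardware:
<syntaxhighlight lang="python">
import time
import cv2

# Minimal latency/frame-rate probe for a USB or CSI camera exposed to OpenCV.
cap = cv2.VideoCapture(0)                       # device index 0 is an assumption
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)

frames, start = 0, time.time()
while frames < 120:
    t0 = time.time()
    ok, frame = cap.read()
    if not ok:
        break
    frames += 1
    print(f"grab took {(time.time() - t0) * 1000:.1f} ms, shape={frame.shape}")
print(f"average FPS: {frames / (time.time() - start):.1f}")
cap.release()
</syntaxhighlight>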
== Advanced Customization and Community Engagement ==
=== Open Source Projects ===
Contribute to or start your own open-source project. For instance, GitHub hosts numerous projects where you can collaborate with others, such as [https://github.com/kscalelabs K-Scale].
=== Modular Design ===
Engage in modular robot design to easily swap components or aesthetics. This approach allows for extensive customization and upgrades over time.
== Safety and Continuous Learning ==
=== Safety Protocols ===
Always implement robust safety measures when testing and demonstrating your robot.
0b889dab046d6fc518b669a5b02ff4dace2f045a
847
846
2024-05-04T20:14:53Z
Vrtnis
21
/* Best Practices for Virtual Testing */
wikitext
text/x-wiki
This is a build guide for getting started experimenting with your own humanoid robot.
This is incomplete; you can help by expanding it!
'''update:''' ''work in progress - starting with a template, plan to expand on sections :)''
This guide is crafted for enthusiasts who are not just looking to study humanoid robotics but to actually build and experiment with their own robots.
== Building Your Humanoid Robot ==
In humanoid robotics, choosing the right components, such as actuators and gearboxes, is crucial. Planetary and cycloidal gear actuators offer precision and strength, while Series Elastic and Quasi-Direct Drive actuators enable smoother, more natural movements. Advanced designs like the MIT Cheetah actuator push the boundaries with fast, agile movements. Projects like the SPIN initiative are also key, as they make high-quality actuator technology more accessible, helping the field evolve and improve.
== Actuators and Gearboxes ==
=== Actuator Types and Design Inspirations ===
==== Planetary and Cycloidal Gear Actuators ====
These actuators remain popular in the robotics community due to their high torque output and compact form factors. Planetary gears are favored for their efficiency and ability to handle high power densities, crucial for humanoid robotics. Cycloidal gears offer superior load-bearing capabilities and minimal backlash, ideal for precise motion control.
==== Series Elastic and Quasi-Direct Drive Actuators ====
Series Elastic Actuators (SEAs) are used in applications requiring safe and compliant human-robot interaction. They incorporate elastic elements, allowing for energy absorption and safer interactions. Quasi-Direct Drive Actuators provide a balance between the control fidelity of direct drives and the mechanical simplicity of geared systems, promoting natural and responsive movements.
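The key idea behind an SEA is that output torque can be estimated from the deflection of the elastic element. A minimal sketch (the stiffness value and function name below are illustrative, not taken from any specific actuator):
<syntaxhighlight lang="python">
def sea_output_torque(theta_motor_rad: float, theta_joint_rad: float, spring_stiffness_nm_per_rad: float) -> float:
    """Estimate SEA output torque from spring deflection: tau = k * (theta_motor - theta_joint)."""
    return spring_stiffness_nm_per_rad * (theta_motor_rad - theta_joint_rad)

# Example: a 300 Nm/rad spring deflected by 0.05 rad transmits roughly 15 Nm.
print(sea_output_torque(0.55, 0.50, 300.0))
</syntaxhighlight>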
==== MIT Cheetah Actuator ====
The MIT Cheetah actuator design is a notable example that several community members are considering emulating. Its design optimizes for rapid, dynamic movements and could potentially set a standard for agile robotic locomotion.
=== Open-Source Development and Collaboration ===
==== SPIN: A Revolutionary Servo Project ====
The [https://github.com/atopile/spin-servo-drive SPIN project] by Atopile is an open-source hardware project aimed at making it easier and more cost-effective to use BLDC servo motors. It is particularly notable for its potential to democratize high-quality actuator technology, making it accessible to a broader range of developers and hobbyists.
=== Community Insights and Future Directions ===
==== Comprehensive Actuator Comparisons ====
The community actively discusses the need for a universal platform to compare and contrast the cost and performance of commercially available actuators. This could involve developing a comprehensive database or chart detailing each actuator's cost per Newton-meter, control schemes, and RPM, providing a valuable resource for both newcomers and experienced developers.
==== Custom Actuator Developments ====
There are ongoing discussions about custom actuators tailored for specific applications. For example, [https://irisdynamics.com/products/orca-series Iris Dynamics' Orca-series electric linear actuators] are claimed to approach the capabilities of human muscle, making them particularly interesting for humanoid applications.
== Assembly Tips ==
===== Community Forums =====
Leverage discussions from platforms like RobotForum to avoid common pitfalls. Whether it's selecting the right planetary gearbox or figuring out the optimal motor for each joint, community insights can be invaluable.
=== Programming and Control ===
==== ROS (Robot Operating System) ====
Start with ROS for an extensive suite of tools for programming and control, suitable for managing complex robotic functions.
==== Custom Software Solutions ====
Explore custom algorithms for adaptive control or reactive behaviors. Integrate advanced sensor feedback loops for real-time adjustments.
== Experimenting with Your Humanoid Robot ==
=== Testing and Iteration ===
==== Virtual Testing Before Physical Implementation in Humanoid Robotics ====
NVIDIA's Isaac Sim and Isaac Gym, alongside other simulators, form a crucial foundation for designing and testing humanoid robots virtually. Insights and suggestions from experts working with these tools are captured below.
===== Isaac-Based Simulators and Frameworks =====
====== Isaac Sim ======
IDE Experience: Provides a comprehensive, if complex, simulation environment.
PhysX Engine: Utilizes the PhysX engine to handle both contact and joint constraints, though Isaac Sim currently does not fully expose closed-loop constraint capabilities.
Joint Constraints: Supports maximal coordinate systems, which include joint constraints that are common in articulated robots.
Virtual Sensors: Allows the simulation of perception with virtual cameras and LiDARs, providing policy training inputs rendered with NVIDIA RTX.
====== Isaac Gym ======
Reinforcement Learning Training: Enables parallel environments for fast policy training.
PHC Approach: Integrates AMP for real-time pose control, making it easier to teach new skills.
Gait Optimization Issues: While 17-DOF walking tasks work well, gait reward optimization needs refinement for more complex tasks.
Closed-Loop Articulation: Belt-driven mechanisms provide a viable alternative for certain closed-loop designs.
====== Orbit Framework ======
Unified Training Framework: Integrates Isaac Sim and Isaac Gym for modular and consistent policy validation.
OmniIsaacGymEnvs: Offers predefined tasks like walking and standing.
====== Omniverse Isaac Gym ======
Shift in Development: NVIDIA is consolidating Isaac Gym into Isaac Sim through Omniverse, providing the best of both worlds.
Challenges: Demands powerful NVIDIA GPUs, potentially limiting some development workflows.
===== External Tools and Comparative Platforms =====
====== Legged Gym ======
A repository showcasing the state-of-the-art in legged robot training.
====== MuJoCo (MJX)======
Offers a lightweight, open-source alternative that supports maximal coordinate constraints and is generally easier to work with.
====== VSim ======
Claims to be 10x faster than other simulators.
====== ManiSkill/Sapien ======
Provides tactile simulation and visual-based policy training that is up to 100x faster than Isaac Sim.
===== Best Practices for Virtual Testing =====
- Incremental Complexity: Start simple and build up to more complex environments and tasks.
- Cross-Simulator Validation: Validate robot models across simulators (e.g., Isaac and MuJoCo) to ensure robustness.
- Incorporate Real-World Fidelity: Include sensor noise and imperfections for better policy generalization.
- Optimize Resources:
Use Azure's A100 GPUs for Isaac training.
Capture real-world data to refine virtual training.
By understanding the nuances and strengths of each simulator, developers can refine their humanoid robots effectively. Using Isaac Sim, Isaac Gym, and complementary tools, a robust simulation approach ensures smooth virtual-to-physical transferability while reducing development time and costs.
== Real-World Testing ==
Gradually transition to physical testing, beginning with simple tasks and moving to more complex interactions.
=== Data Collection and Analysis ===
==== Camera Systems ====
Consider integrating advanced camera systems, such as those from e-con Systems or Arducam, for visual feedback and navigation. When choosing a camera, weigh factors such as latency, resolution, and ease of integration with your main control system.
== Advanced Customization and Community Engagement ==
=== Open Source Projects ===
Contribute to or start your own open-source project. Platforms like GitHub host numerous projects where you can collaborate with others, such as [https://github.com/kscalelabs K-Scale].
=== Modular Design ===
Engage in modular robot design to easily swap components or aesthetics. This approach allows for extensive customization and upgrades over time.
== Safety and Continuous Learning ==
=== Safety Protocols ===
Always implement robust safety measures when testing and demonstrating your robot.
2622dcd3902c600f037cd27d18576559f7e9e046
848
847
2024-05-04T20:32:15Z
Vrtnis
21
/* SPIN: A Revolutionary Servo Project */
wikitext
text/x-wiki
This is a build guide for getting started experimenting with your own humanoid robot.
This is incomplete; you can help by expanding it!
'''Update:''' work in progress; starting from a template, with plans to expand each section.
This guide is crafted for enthusiasts who are not just looking to study humanoid robotics but to actually build and experiment with their own robots.
== Building Your Humanoid Robot ==
In humanoid robotics, choosing the right components, such as actuators and gearboxes, is crucial. Builders can use planetary and cycloidal gear actuators for their precision and strength, along with Series Elastic and Quasi-Direct Drive actuators for smoother, more natural movements. Advanced designs like the MIT Cheetah actuator push the boundaries with fast, agile movements. Projects like the SPIN initiative are also key, as they make high-quality actuator technology more accessible, helping the field evolve and improve.
== Actuators and Gearboxes ==
=== Actuator Types and Design Inspirations ===
==== Planetary and Cycloidal Gear Actuators ====
These actuators remain popular in the robotics community due to their high torque output and compact form factors. Planetary gears are favored for their efficiency and ability to handle high power densities, crucial for humanoid robotics. Cycloidal gears offer superior load-bearing capabilities and minimal backlash, ideal for precise motion control.
==== Series Elastic and Quasi-Direct Drive Actuators ====
Series Elastic Actuators (SEAs) are used in applications requiring safe and compliant human-robot interaction. They incorporate elastic elements, allowing for energy absorption and safer interactions. Quasi-Direct Drive Actuators provide a balance between the control fidelity of direct drives and the mechanical simplicity of geared systems, promoting natural and responsive movements.
==== MIT Cheetah Actuator ====
The MIT Cheetah actuator design is a notable example that several community members are considering emulating. Its design optimizes for rapid, dynamic movements and could potentially set a standard for agile robotic locomotion.
=== Open-Source Development and Collaboration ===
==== SPIN: A Revolutionary Servo Project ====
The [https://github.com/atopile/spin-servo-drive SPIN Project] by Atopile is developing an open-source hardware project aimed at making it easier and more cost-effective to use BLDC servo motors. This project is particularly notable for its potential to democratize high-quality actuator technology, making it accessible for a broader range of developers and hobbyists.
=== Community Insights and Future Directions ===
==== Comprehensive Actuator Comparisons ====
The community actively discusses the need for a universal platform to compare and contrast the cost and performance of commercially available actuators. This could involve developing a comprehensive database or chart detailing each actuator's cost per Newton-meter, control schemes, and RPM, providing a valuable resource for both newcomers and experienced developers.
==== Custom Actuator Developments ====
There are ongoing discussions about custom actuators tailored for specific applications. For example, [https://irisdynamics.com/products/orca-series Iris Dynamics' Orca-series electric linear actuators] are claimed to approach the capabilities of human muscle, making them particularly interesting for humanoid applications.
== Assembly Tips ==
===== Community Forums =====
Leverage discussions from platforms like RobotForum to avoid common pitfalls. Whether it's selecting the right planetary gearbox or figuring out the optimal motor for each joint, community insights can be invaluable.
=== Programming and Control ===
==== ROS (Robot Operating System) ====
Start with ROS for an extensive suite of tools for programming and control, suitable for managing complex robotic functions.
==== Custom Software Solutions ====
Explore custom algorithms for adaptive control or reactive behaviors. Integrate advanced sensor feedback loops for real-time adjustments.
== Experimenting with Your Humanoid Robot ==
=== Testing and Iteration ===
==== Virtual Testing Before Physical Implementation in Humanoid Robotics ====
NVIDIA's Isaac Sim and Isaac Gym, alongside other simulators, form a crucial foundation for designing and testing humanoid robots virtually. Insights and suggestions from experts working with these tools are captured below.
===== Isaac-Based Simulators and Frameworks =====
====== Isaac Sim ======
IDE Experience: Provides a comprehensive, if complex, simulation environment.
PhysX Engine: Utilizes the PhysX engine to handle both contact and joint constraints, though Isaac Sim currently does not fully expose closed-loop constraint capabilities.
Joint Constraints: Supports maximal coordinate systems, which include joint constraints that are common in articulated robots.
Virtual Sensors: Allows the simulation of perception with virtual cameras and LiDARs, providing policy training inputs rendered with NVIDIA RTX.
====== Isaac Gym ======
Reinforcement Learning Training: Enables parallel environments for fast policy training.
PHC Approach: Integrates AMP for real-time pose control, making it easier to teach new skills.
Gait Optimization Issues: While 17-DOF walking tasks work well, gait reward optimization needs refinement for more complex tasks.
Closed-Loop Articulation: Belt-driven mechanisms provide a viable alternative for certain closed-loop designs.
====== Orbit Framework ======
Unified Training Framework: Integrates Isaac Sim and Isaac Gym for modular and consistent policy validation.
OmniIsaacGymEnvs: Offers predefined tasks like walking and standing.
====== Omniverse Isaac Gym ======
Shift in Development: NVIDIA is consolidating Isaac Gym into Isaac Sim through Omniverse, providing the best of both worlds.
Challenges: Demands powerful NVIDIA GPUs, potentially limiting some development workflows.
===== External Tools and Comparative Platforms =====
====== Legged Gym ======
A repository showcasing the state-of-the-art in legged robot training.
====== MuJoCo (MJX)======
Offers a lightweight, open-source alternative that supports maximal coordinate constraints and is generally easier to work with.
====== VSim ======
Claims to be 10x faster than other simulators.
====== ManiSkill/Sapien ======
Provides tactile simulation and visual-based policy training that is up to 100x faster than Isaac Sim.
===== Best Practices for Virtual Testing =====
- Incremental Complexity: Start simple and build up to more complex environments and tasks.
- Cross-Simulator Validation: Validate robot models across simulators (e.g., Isaac and MuJoCo) to ensure robustness.
- Incorporate Real-World Fidelity: Include sensor noise and imperfections for better policy generalization.
- Optimize Resources:
Use Azure's A100 GPUs for Isaac training.
Capture real-world data to refine virtual training.
By understanding the nuances and strengths of each simulator, developers can refine their humanoid robots effectively. Using Isaac Sim, Isaac Gym, and complementary tools, a robust simulation approach ensures smooth virtual-to-physical transferability while reducing development time and costs.
== Real-World Testing ==
Gradually transition to physical testing, beginning with simple tasks and moving to more complex interactions.
=== Data Collection and Analysis ===
==== Camera Systems ====
Consider integrating advanced camera systems, such as those from e-con Systems or Arducam, for visual feedback and navigation. When choosing a camera, weigh factors such as latency, resolution, and ease of integration with your main control system.
== Advanced Customization and Community Engagement ==
=== Open Source Projects ===
Contribute to or start your own open-source project. Platforms like GitHub host numerous projects where you can collaborate with others, such as [https://github.com/kscalelabs K-Scale].
=== Modular Design ===
Engage in modular robot design to easily swap components or aesthetics. This approach allows for extensive customization and upgrades over time.
== Safety and Continuous Learning ==
=== Safety Protocols ===
Always implement robust safety measures when testing and demonstrating your robot.
2b1fa6ab0fdc5962a535197892ca7e3459ba2b91
849
848
2024-05-04T21:35:46Z
Vrtnis
21
/* Planetary and Cycloidal Gear Actuators */
wikitext
text/x-wiki
This is a build guide for getting started experimenting with your own humanoid robot.
This is incomplete; you can help by expanding it!
'''Update:''' work in progress; starting from a template, with plans to expand each section.
This guide is crafted for enthusiasts who are not just looking to study humanoid robotics but to actually build and experiment with their own robots.
== Building Your Humanoid Robot ==
In humanoid robotics, choosing the right components, such as actuators and gearboxes, is crucial. Builders can use planetary and cycloidal gear actuators for their precision and strength, along with Series Elastic and Quasi-Direct Drive actuators for smoother, more natural movements. Advanced designs like the MIT Cheetah actuator push the boundaries with fast, agile movements. Projects like the SPIN initiative are also key, as they make high-quality actuator technology more accessible, helping the field evolve and improve.
== Actuators and Gearboxes ==
=== Actuator Types and Design Inspirations ===
==== Planetary and Cycloidal Gear Actuators ====
These actuators remain popular in the robotics community due to their high torque output and compact form factors. Planetary gears are favored for their efficiency and ability to handle high power densities, crucial for humanoid robotics. Cycloidal gears offer superior load-bearing capabilities and minimal backlash, ideal for precise motion control. MyActuator (just one option) offers a variety of actuators. These actuators, while still relatively pricey, offer robust performance and are integral to the efficient functioning of the builds. Some models are:
RMD X4: A lightweight and compact actuator that provides precise control and high efficiency.
RMD X6: Offers a good balance of torque and speed, suitable for medium-sized applications.
RMD X8: Features a more powerful motor and higher torque capacity, making it ideal for more demanding tasks.
RMD X10: The most powerful actuator used, designed for high torque applications with excellent control features.
==== Series Elastic and Quasi-Direct Drive Actuators ====
Series Elastic Actuators (SEAs) are used in applications requiring safe and compliant human-robot interaction. They incorporate elastic elements, allowing for energy absorption and safer interactions. Quasi-Direct Drive Actuators provide a balance between the control fidelity of direct drives and the mechanical simplicity of geared systems, promoting natural and responsive movements.
==== MIT Cheetah Actuator ====
The MIT Cheetah actuator design is a notable example that several community members are considering emulating. Its design optimizes for rapid, dynamic movements and could potentially set a standard for agile robotic locomotion.
=== Open-Source Development and Collaboration ===
==== SPIN: A Revolutionary Servo Project ====
The [https://github.com/atopile/spin-servo-drive SPIN Project] by Atopile is developing an open-source hardware project aimed at making it easier and more cost-effective to use BLDC servo motors. This project is particularly notable for its potential to democratize high-quality actuator technology, making it accessible for a broader range of developers and hobbyists.
=== Community Insights and Future Directions ===
==== Comprehensive Actuator Comparisons ====
The community actively discusses the need for a universal platform to compare and contrast the cost and performance of commercially available actuators. This could involve developing a comprehensive database or chart detailing each actuator's cost per Newton-meter, control schemes, and RPM, providing a valuable resource for both newcomers and experienced developers.
==== Custom Actuator Developments ====
There are ongoing discussions about custom actuators tailored for specific applications. For example, [https://irisdynamics.com/products/orca-series Iris Dynamics' Orca-series electric linear actuators] are claimed to approach the capabilities of human muscle, making them particularly interesting for humanoid applications.
== Assembly Tips ==
===== Community Forums =====
Leverage discussions from platforms like RobotForum to avoid common pitfalls. Whether it's selecting the right planetary gearbox or figuring out the optimal motor for each joint, community insights can be invaluable.
=== Programming and Control ===
==== ROS (Robot Operating System) ====
Start with ROS for an extensive suite of tools for programming and control, suitable for managing complex robotic functions.
==== Custom Software Solutions ====
Explore custom algorithms for adaptive control or reactive behaviors. Integrate advanced sensor feedback loops for real-time adjustments.
== Experimenting with Your Humanoid Robot ==
=== Testing and Iteration ===
==== Virtual Testing Before Physical Implementation in Humanoid Robotics ====
NVIDIA's Isaac Sim and Isaac Gym, alongside other simulators, form a crucial foundation for designing and testing humanoid robots virtually. Insights and suggestions from experts working with these tools are captured below.
===== Isaac-Based Simulators and Frameworks =====
====== Isaac Sim ======
IDE Experience: Provides a comprehensive, if complex, simulation environment.
PhysX Engine: Utilizes the PhysX engine to handle both contact and joint constraints, though Isaac Sim currently does not fully expose closed-loop constraint capabilities.
Joint Constraints: Supports maximal coordinate systems, which include joint constraints that are common in articulated robots.
Virtual Sensors: Allows the simulation of perception with virtual cameras and LiDARs, providing policy training inputs rendered with NVIDIA RTX.
====== Isaac Gym ======
Reinforcement Learning Training: Enables parallel environments for fast policy training.
PHC Approach: Integrates AMP for real-time pose control, making it easier to teach new skills.
Gait Optimization Issues: While 17-DOF walking tasks work well, gait reward optimization needs refinement for more complex tasks.
Closed-Loop Articulation: Belt-driven mechanisms provide a viable alternative for certain closed-loop designs.
====== Orbit Framework ======
Unified Training Framework: Integrates Isaac Sim and Isaac Gym for modular and consistent policy validation.
OmniIsaacGymEnvs: Offers predefined tasks like walking and standing.
====== Omniverse Isaac Gym ======
Shift in Development: NVIDIA is consolidating Isaac Gym into Isaac Sim through Omniverse, providing the best of both worlds.
Challenges: Demands powerful NVIDIA GPUs, potentially limiting some development workflows.
===== External Tools and Comparative Platforms =====
====== Legged Gym ======
A repository showcasing the state-of-the-art in legged robot training.
====== MuJoCo (MJX)======
Offers a lightweight, open-source alternative that supports maximal coordinate constraints and is generally easier to work with.
====== VSim ======
Claims to be 10x faster than other simulators.
====== ManiSkill/Sapien ======
Provides tactile simulation and visual-based policy training that is up to 100x faster than Isaac Sim.
===== Best Practices for Virtual Testing =====
- Incremental Complexity: Start simple and build up to more complex environments and tasks.
- Cross-Simulator Validation: Validate robot models across simulators (e.g., Isaac and MuJoCo) to ensure robustness.
- Incorporate Real-World Fidelity: Include sensor noise and imperfections for better policy generalization.
- Optimize Resources:
Use Azure's A100 GPUs for Isaac training.
Capture real-world data to refine virtual training.
By understanding the nuances and strengths of each simulator, developers can refine their humanoid robots effectively. Using Isaac Sim, Isaac Gym, and complementary tools, a robust simulation approach ensures smooth virtual-to-physical transferability while reducing development time and costs.
== Real-World Testing ==
Gradually transition to physical testing, beginning with simple tasks and moving to more complex interactions.
=== Data Collection and Analysis ===
==== Camera Systems ====
Consider integrating advanced camera systems, such as those from e-con Systems or Arducam, for visual feedback and navigation. When choosing a camera, weigh factors such as latency, resolution, and ease of integration with your main control system.
== Advanced Customization and Community Engagement ==
=== Open Source Projects ===
Contribute to or start your own open-source project. Platforms like GitHub host numerous projects where you can collaborate with others, such as [https://github.com/kscalelabs K-Scale].
=== Modular Design ===
Engage in modular robot design to easily swap components or aesthetics. This approach allows for extensive customization and upgrades over time.
== Safety and Continuous Learning ==
=== Safety Protocols ===
Always implement robust safety measures when testing and demonstrating your robot.
83b4f61e0fd355379385981fb1873046311345a2
850
849
2024-05-04T21:36:17Z
Vrtnis
21
/* Planetary and Cycloidal Gear Actuators */
wikitext
text/x-wiki
This is a build guide for getting started experimenting with your own humanoid robot.
This is incomplete; you can help by expanding it!
'''Update:''' work in progress; starting from a template, with plans to expand each section.
This guide is crafted for enthusiasts who are not just looking to study humanoid robotics but to actually build and experiment with their own robots.
== Building Your Humanoid Robot ==
In humanoid robotics, choosing the right components, such as actuators and gearboxes, is crucial. Builders can use planetary and cycloidal gear actuators for their precision and strength, along with Series Elastic and Quasi-Direct Drive actuators for smoother, more natural movements. Advanced designs like the MIT Cheetah actuator push the boundaries with fast, agile movements. Projects like the SPIN initiative are also key, as they make high-quality actuator technology more accessible, helping the field evolve and improve.
== Actuators and Gearboxes ==
=== Actuator Types and Design Inspirations ===
==== Planetary and Cycloidal Gear Actuators ====
These actuators remain popular in the robotics community due to their high torque output and compact form factors. Planetary gears are favored for their efficiency and ability to handle high power densities, crucial for humanoid robotics. Cycloidal gears offer superior load-bearing capabilities and minimal backlash, ideal for precise motion control.
MyActuator (just one option) offers a variety of planetary actuators. These actuators, while still relatively pricey, offer robust performance and are integral to the efficient functioning of the builds. Some models are:
RMD X4: A lightweight and compact actuator that provides precise control and high efficiency.
RMD X6: Offers a good balance of torque and speed, suitable for medium-sized applications.
RMD X8: Features a more powerful motor and higher torque capacity, making it ideal for more demanding tasks.
RMD X10: The most powerful actuator used, designed for high torque applications with excellent control features.
==== Series Elastic and Quasi-Direct Drive Actuators ====
Series Elastic Actuators (SEAs) are used in applications requiring safe and compliant human-robot interaction. They incorporate elastic elements, allowing for energy absorption and safer interactions. Quasi-Direct Drive Actuators provide a balance between the control fidelity of direct drives and the mechanical simplicity of geared systems, promoting natural and responsive movements.
==== MIT Cheetah Actuator ====
The MIT Cheetah actuator design is a notable example that several community members are considering emulating. Its design optimizes for rapid, dynamic movements and could potentially set a standard for agile robotic locomotion.
=== Open-Source Development and Collaboration ===
==== SPIN: A Revolutionary Servo Project ====
The [https://github.com/atopile/spin-servo-drive SPIN Project] by Atopile is developing an open-source hardware project aimed at making it easier and more cost-effective to use BLDC servo motors. This project is particularly notable for its potential to democratize high-quality actuator technology, making it accessible for a broader range of developers and hobbyists.
=== Community Insights and Future Directions ===
==== Comprehensive Actuator Comparisons ====
The community actively discusses the need for a universal platform to compare and contrast the cost and performance of commercially available actuators. This could involve developing a comprehensive database or chart detailing each actuator's cost per Newton-meter, control schemes, and RPM, providing a valuable resource for both newcomers and experienced developers.
==== Custom Actuator Developments ====
There are ongoing discussions about custom actuators tailored for specific applications. For example, [https://irisdynamics.com/products/orca-series Iris Dynamics' Orca-series electric linear actuators] are claimed to approach the capabilities of human muscle, making them particularly interesting for humanoid applications.
== Assembly Tips ==
===== Community Forums =====
Leverage discussions from platforms like RobotForum to avoid common pitfalls. Whether it's selecting the right planetary gearbox or figuring out the optimal motor for each joint, community insights can be invaluable.
=== Programming and Control ===
==== ROS (Robot Operating System) ====
Start with ROS for an extensive suite of tools for programming and control, suitable for managing complex robotic functions.
==== Custom Software Solutions ====
Explore custom algorithms for adaptive control or reactive behaviors. Integrate advanced sensor feedback loops for real-time adjustments.
== Experimenting with Your Humanoid Robot ==
=== Testing and Iteration ===
==== Virtual Testing Before Physical Implementation in Humanoid Robotics ====
NVIDIA's Isaac Sim and Isaac Gym, alongside other simulators, form a crucial foundation for designing and testing humanoid robots virtually. Insights and suggestions from experts working with these tools are captured below.
===== Isaac-Based Simulators and Frameworks =====
====== Isaac Sim ======
IDE Experience: Provides a comprehensive, if complex, simulation environment.
PhysX Engine: Utilizes the PhysX engine to handle both contact and joint constraints, though Isaac Sim currently does not fully expose closed-loop constraint capabilities.
Joint Constraints: Supports maximal coordinate systems, which include joint constraints that are common in articulated robots.
Virtual Sensors: Allows the simulation of perception with virtual cameras and LiDARs, providing policy training inputs rendered with NVIDIA RTX.
====== Isaac Gym ======
Reinforcement Learning Training: Enables parallel environments for fast policy training.
PHC Approach: Integrates AMP for real-time pose control, making it easier to teach new skills.
Gait Optimization Issues: While 17-DOF walking tasks work well, gait reward optimization needs refinement for more complex tasks.
Closed-Loop Articulation: Belt-driven mechanisms provide a viable alternative for certain closed-loop designs.
====== Orbit Framework ======
Unified Training Framework: Integrates Isaac Sim and Isaac Gym for modular and consistent policy validation.
OmniIsaacGymEnvs: Offers predefined tasks like walking and standing.
====== Omniverse Isaac Gym ======
Shift in Development: NVIDIA is consolidating Isaac Gym into Isaac Sim through Omniverse, providing the best of both worlds.
Challenges: Demands powerful NVIDIA GPUs, potentially limiting some development workflows.
===== External Tools and Comparative Platforms =====
====== Legged Gym ======
A repository showcasing the state-of-the-art in legged robot training.
====== MuJoCo (MJX)======
Offers a lightweight, open-source alternative that supports maximal coordinate constraints and is generally easier to work with.
====== VSim ======
Claims to be 10x faster than other simulators.
====== ManiSkill/Sapien ======
Provides tactile simulation and visual-based policy training that is up to 100x faster than Isaac Sim.
===== Best Practices for Virtual Testing =====
- Incremental Complexity: Start simple and build up to more complex environments and tasks.
- Cross-Simulator Validation: Validate robot models across simulators (e.g., Isaac and MuJoCo) to ensure robustness.
- Incorporate Real-World Fidelity: Include sensor noise and imperfections for better policy generalization.
- Optimize Resources:
Use Azure's A100 GPUs for Isaac training.
Capture real-world data to refine virtual training.
By understanding the nuances and strengths of each simulator, developers can refine their humanoid robots effectively. Using Isaac Sim, Isaac Gym, and complementary tools, a robust simulation approach ensures smooth virtual-to-physical transferability while reducing development time and costs.
== Real-World Testing ==
Gradually transition to physical testing, beginning with simple tasks and moving to more complex interactions.
=== Data Collection and Analysis ===
==== Camera Systems ====
Consider integrating advanced camera systems, such as those from e-con Systems or Arducam, for visual feedback and navigation. When choosing a camera, weigh factors such as latency, resolution, and ease of integration with your main control system.
== Advanced Customization and Community Engagement ==
=== Open Source Projects ===
Contribute to or start your own open-source project. Platforms like GitHub host numerous projects where you can collaborate with others, such as [https://github.com/kscalelabs K-Scale].
=== Modular Design ===
Engage in modular robot design to easily swap components or aesthetics. This approach allows for extensive customization and upgrades over time.
== Safety and Continuous Learning ==
=== Safety Protocols ===
Always implement robust safety measures when testing and demonstrating your robot.
dbba943ce680c422b74ab57a8900f30bc33ebc57
851
850
2024-05-04T21:36:41Z
Vrtnis
21
/* Planetary and Cycloidal Gear Actuators */
wikitext
text/x-wiki
This is a build guide for getting started experimenting with your own humanoid robot.
This is incomplete; you can help by expanding it!
'''Update:''' work in progress; starting from a template, with plans to expand each section.
This guide is crafted for enthusiasts who are not just looking to study humanoid robotics but to actually build and experiment with their own robots.
== Building Your Humanoid Robot ==
In humanoid robotics, choosing the right components, such as actuators and gearboxes, is crucial. Builders can use planetary and cycloidal gear actuators for their precision and strength, along with Series Elastic and Quasi-Direct Drive actuators for smoother, more natural movements. Advanced designs like the MIT Cheetah actuator push the boundaries with fast, agile movements. Projects like the SPIN initiative are also key, as they make high-quality actuator technology more accessible, helping the field evolve and improve.
== Actuators and Gearboxes ==
=== Actuator Types and Design Inspirations ===
==== Planetary and Cycloidal Gear Actuators ====
These actuators remain popular in the robotics community due to their high torque output and compact form factors. Planetary gears are favored for their efficiency and ability to handle high power densities, crucial for humanoid robotics. Cycloidal gears offer superior load-bearing capabilities and minimal backlash, ideal for precise motion control.
MyActuator (just one option) offers a variety of planetary actuators. These actuators, while still relatively pricey, offer robust performance and are integral to the efficient functioning of the builds. Some models are:
RMD X4: A lightweight and compact actuator that provides precise control and high efficiency.
RMD X6: Offers a good balance of torque and speed, suitable for medium-sized applications.
RMD X8: Features a more powerful motor and higher torque capacity, making it ideal for more demanding tasks.
RMD X10: The most powerful actuator used, designed for high torque applications with excellent control features.
==== Series Elastic and Quasi-Direct Drive Actuators ====
Series Elastic Actuators (SEAs) are used in applications requiring safe and compliant human-robot interaction. They incorporate elastic elements, allowing for energy absorption and safer interactions. Quasi-Direct Drive Actuators provide a balance between the control fidelity of direct drives and the mechanical simplicity of geared systems, promoting natural and responsive movements.
==== MIT Cheetah Actuator ====
The MIT Cheetah actuator design is a notable example that several community members are considering emulating. Its design optimizes for rapid, dynamic movements and could potentially set a standard for agile robotic locomotion.
=== Open-Source Development and Collaboration ===
==== SPIN: A Revolutionary Servo Project ====
The [https://github.com/atopile/spin-servo-drive SPIN Project] by Atopile is developing an open-source hardware project aimed at making it easier and more cost-effective to use BLDC servo motors. This project is particularly notable for its potential to democratize high-quality actuator technology, making it accessible for a broader range of developers and hobbyists.
=== Community Insights and Future Directions ===
==== Comprehensive Actuator Comparisons ====
The community actively discusses the need for a universal platform to compare and contrast the cost and performance of commercially available actuators. This could involve developing a comprehensive database or chart detailing each actuator's cost per Newton-meter, control schemes, and RPM, providing a valuable resource for both newcomers and experienced developers.
==== Custom Actuator Developments ====
There are ongoing discussions about custom actuators tailored for specific applications. For example, [https://irisdynamics.com/products/orca-series Iris Dynamics' Orca-series electric linear actuators] are claimed to approach the capabilities of human muscle, making them particularly interesting for humanoid applications.
== Assembly Tips ==
===== Community Forums =====
Leverage discussions from platforms like RobotForum to avoid common pitfalls. Whether it's selecting the right planetary gearbox or figuring out the optimal motor for each joint, community insights can be invaluable.
=== Programming and Control ===
==== ROS (Robot Operating System) ====
Start with ROS for an extensive suite of tools for programming and control, suitable for managing complex robotic functions.
==== Custom Software Solutions ====
Explore custom algorithms for adaptive control or reactive behaviors. Integrate advanced sensor feedback loops for real-time adjustments.
== Experimenting with Your Humanoid Robot ==
=== Testing and Iteration ===
==== Virtual Testing Before Physical Implementation in Humanoid Robotics ====
NVIDIA's Isaac Sim and Isaac Gym, alongside other simulators, form a crucial foundation for designing and testing humanoid robots virtually. Insights and suggestions from experts working with these tools are captured below.
===== Isaac-Based Simulators and Frameworks =====
====== Isaac Sim ======
IDE Experience: Provides a comprehensive, if complex, simulation environment.
PhysX Engine: Utilizes the PhysX engine to handle both contact and joint constraints, though Isaac Sim currently does not fully expose closed-loop constraint capabilities.
Joint Constraints: Supports maximal coordinate systems, which include joint constraints that are common in articulated robots.
Virtual Sensors: Allows the simulation of perception with virtual cameras and LiDARs, providing policy training inputs rendered with NVIDIA RTX.
====== Isaac Gym ======
Reinforcement Learning Training: Enables parallel environments for fast policy training.
PHC Approach: Integrates AMP for real-time pose control, making it easier to teach new skills.
Gait Optimization Issues: While 17-DOF walking tasks work well, gait reward optimization needs refinement for more complex tasks.
Closed-Loop Articulation: Belt-driven mechanisms provide a viable alternative for certain closed-loop designs.
====== Orbit Framework ======
Unified Training Framework: Integrates Isaac Sim and Isaac Gym for modular and consistent policy validation.
OmniIsaacGymEnvs: Offers predefined tasks like walking and standing.
====== Omniverse Isaac Gym ======
Shift in Development: NVIDIA is consolidating Isaac Gym into Isaac Sim through Omniverse, providing the best of both worlds.
Challenges: Demands powerful NVIDIA GPUs, potentially limiting some development workflows.
===== External Tools and Comparative Platforms =====
====== Legged Gym ======
A repository showcasing the state-of-the-art in legged robot training.
====== MuJoCo (MJX)======
Offers a lightweight, open-source alternative that supports maximal coordinate constraints and is generally easier to work with.
====== VSim ======
Claims to be 10x faster than other simulators.
====== ManiSkill/Sapien ======
Provides tactile simulation and visual-based policy training that is up to 100x faster than Isaac Sim.
===== Best Practices for Virtual Testing =====
- Incremental Complexity: Start simple and build up to more complex environments and tasks.
- Cross-Simulator Validation: Validate robot models across simulators (e.g., Isaac and MuJoCo) to ensure robustness.
- Incorporate Real-World Fidelity: Include sensor noise and imperfections for better policy generalization.
- Optimize Resources:
Use Azure's A100 GPUs for Isaac training.
Capture real-world data to refine virtual training.
By understanding the nuances and strengths of each simulator, developers can refine their humanoid robots effectively. Using Isaac Sim, Isaac Gym, and complementary tools, a robust simulation approach ensures smooth virtual-to-physical transferability while reducing development time and costs.
== Real-World Testing ==
Gradually transition to physical testing, beginning with simple tasks and moving to more complex interactions.
=== Data Collection and Analysis ===
==== Camera Systems ====
Consider integrating advanced camera systems, such as those from e-con Systems or Arducam, for visual feedback and navigation. When choosing a camera, weigh factors such as latency, resolution, and ease of integration with your main control system.
== Advanced Customization and Community Engagement ==
=== Open Source Projects ===
Contribute to or start your own open-source project. Platforms like GitHub host numerous projects where you can collaborate with others, such as [https://github.com/kscalelabs K-Scale].
=== Modular Design ===
Engage in modular robot design to easily swap components or aesthetics. This approach allows for extensive customization and upgrades over time.
== Safety and Continuous Learning ==
=== Safety Protocols ===
Always implement robust safety measures when testing and demonstrating your robot.
2a121f652bc54716a1fe52ea1ac81aa90460f6ea
852
851
2024-05-04T21:49:25Z
Vrtnis
21
/* Actuator Types and Design Inspirations */
wikitext
text/x-wiki
This is a build guide for getting started experimenting with your own humanoid robot.
This is incomplete; you can help by expanding it!
'''Update:''' work in progress; starting from a template, with plans to expand each section.
This guide is crafted for enthusiasts who are not just looking to study humanoid robotics but to actually build and experiment with their own robots.
== Building Your Humanoid Robot ==
In humanoid robotics, choosing the right components, such as actuators and gearboxes, is crucial. Builders can use planetary and cycloidal gear actuators for their precision and strength, along with Series Elastic and Quasi-Direct Drive actuators for smoother, more natural movements. Advanced designs like the MIT Cheetah actuator push the boundaries with fast, agile movements. Projects like the SPIN initiative are also key, as they make high-quality actuator technology more accessible, helping the field evolve and improve.
== Actuators and Gearboxes ==
=== Actuator Types and Design Inspirations ===
==== Planetary and Cycloidal Gear Actuators ====
These actuators remain popular in the robotics community due to their high torque output and compact form factors. Planetary gears are favored for their efficiency and ability to handle high power densities, crucial for humanoid robotics. Cycloidal gears offer superior load-bearing capabilities and minimal backlash, ideal for precise motion control.
MyActuator (just one option) offers a variety of planetary actuators. These actuators, while still relatively pricey, offer robust performance and are integral to the efficient functioning of the builds. Some models are:
RMD X4: A lightweight and compact actuator that provides precise control and high efficiency.
RMD X6: Offers a good balance of torque and speed, suitable for medium-sized applications.
RMD X8: Features a more powerful motor and higher torque capacity, making it ideal for more demanding tasks.
RMD X10: The most powerful actuator used, designed for high torque applications with excellent control features.
Here is an MIT-hosted [https://jakeread.pages.cba.mit.edu/actuators/ scatter plot] comparing actuators.
==== Series Elastic and Quasi-Direct Drive Actuators ====
Series Elastic Actuators (SEAs) are used in applications requiring safe and compliant human-robot interaction. They incorporate elastic elements, allowing for energy absorption and safer interactions. Quasi-Direct Drive Actuators provide a balance between the control fidelity of direct drives and the mechanical simplicity of geared systems, promoting natural and responsive movements.
==== MIT Cheetah Actuator ====
The MIT Cheetah actuator design is a notable example that several community members are considering emulating. Its design optimizes for rapid, dynamic movements and could potentially set a standard for agile robotic locomotion.
=== Open-Source Development and Collaboration ===
==== SPIN: A Revolutionary Servo Project ====
The [https://github.com/atopile/spin-servo-drive SPIN Project] by Atopile is developing an open-source hardware project aimed at making it easier and more cost-effective to use BLDC servo motors. This project is particularly notable for its potential to democratize high-quality actuator technology, making it accessible for a broader range of developers and hobbyists.
=== Community Insights and Future Directions ===
==== Comprehensive Actuator Comparisons ====
The community actively discusses the need for a universal platform to compare and contrast the cost and performance of commercially available actuators. This could involve developing a comprehensive database or chart detailing each actuator's cost per Newton-meter, control schemes, and RPM, providing a valuable resource for both newcomers and experienced developers.
==== Custom Actuator Developments ====
There are ongoing discussions about custom actuators tailored for specific applications. For example, [https://irisdynamics.com/products/orca-series Iris Dynamics' Orca-series electric linear actuators] are claimed to approach the capabilities of human muscle, making them particularly interesting for humanoid applications.
== Assembly Tips ==
===== Community Forums =====
Leverage discussions from platforms like RobotForum to avoid common pitfalls. Whether it's selecting the right planetary gearbox or figuring out the optimal motor for each joint, community insights can be invaluable.
=== Programming and Control ===
==== ROS (Robot Operating System) ====
Start with ROS for an extensive suite of tools for programming and control, suitable for managing complex robotic functions.
==== Custom Software Solutions ====
Explore custom algorithms for adaptive control or reactive behaviors. Integrate advanced sensor feedback loops for real-time adjustments.
== Experimenting with Your Humanoid Robot ==
=== Testing and Iteration ===
==== Virtual Testing Before Physical Implementation in Humanoid Robotics ====
NVIDIA's Isaac Sim and Isaac Gym, alongside other simulators, form a crucial foundation for designing and testing humanoid robots virtually. Insights and suggestions from experts working with these tools are captured below.
===== Isaac-Based Simulators and Frameworks =====
====== Isaac Sim ======
IDE Experience: Provides a comprehensive, if complex, simulation environment.
PhysX Engine: Utilizes the PhysX engine to handle both contact and joint constraints, though Isaac Sim currently does not fully expose closed-loop constraint capabilities.
Joint Constraints: Supports maximal coordinate systems, which include joint constraints that are common in articulated robots.
Virtual Sensors: Allows the simulation of perception with virtual cameras and LiDARs, providing policy training inputs rendered with NVIDIA RTX.
====== Isaac Gym ======
Reinforcement Learning Training: Enables parallel environments for fast policy training.
PHC Approach: Integrates AMP for real-time pose control, making it easier to teach new skills.
Gait Optimization Issues: While 17-DOF walking tasks work well, gait reward optimization needs refinement for more complex tasks.
Closed-Loop Articulation: Belt-driven mechanisms provide a viable alternative for certain closed-loop designs.
====== Orbit Framework ======
Unified Training Framework: Integrates Isaac Sim and Isaac Gym for modular and consistent policy validation.
OmniIsaacGymEnvs: Offers predefined tasks like walking and standing.
====== Omniverse Isaac Gym ======
Shift in Development: NVIDIA is consolidating Isaac Gym into Isaac Sim through Omniverse, providing the best of both worlds.
Challenges: Demands powerful NVIDIA GPUs, potentially limiting some development workflows.
===== External Tools and Comparative Platforms =====
====== Legged Gym ======
A repository showcasing the state-of-the-art in legged robot training.
====== MuJoCo (MJX)======
Offers a lightweight, open-source alternative that supports maximal coordinate constraints and is generally easier to work with.
====== VSim ======
Claims to be 10x faster than other simulators.
====== ManiSkill/Sapien ======
Provides tactile simulation and visual-based policy training that is up to 100x faster than Isaac Sim.
===== Best Practices for Virtual Testing =====
- Incremental Complexity: Start simple and build up to more complex environments and tasks.
- Cross-Simulator Validation: Validate robot models across simulators (e.g., Isaac and MuJoCo) to ensure robustness.
- Incorporate Real-World Fidelity: Include sensor noise and imperfections for better policy generalization.
- Optimize Resources:
Use Azure's A100 GPUs for Isaac training.
Capture real-world data to refine virtual training.
By understanding the nuances and strengths of each simulator, developers can refine their humanoid robots effectively. Using Isaac Sim, Isaac Gym, and complementary tools, a robust simulation approach ensures smooth virtual-to-physical transferability while reducing development time and costs.
== Real-World Testing ==
Gradually transition to physical testing, beginning with simple tasks and moving to more complex interactions.
=== Data Collection and Analysis ===
==== Camera Systems ====
Consider integrating advanced camera systems, such as those from e-con Systems or Arducam, for visual feedback and navigation. When choosing a camera, weigh factors such as latency, resolution, and ease of integration with your main control system.
== Advanced Customization and Community Engagement ==
=== Open Source Projects ===
Contribute to or start your own open-source project. Platforms like GitHub host numerous projects where you can collaborate with others, such as [https://github.com/kscalelabs K-Scale].
=== Modular Design ===
Engage in modular robot design to easily swap components or aesthetics. This approach allows for extensive customization and upgrades over time.
== Safety and Continuous Learning ==
=== Safety Protocols ===
Always implement robust safety measures when testing and demonstrating your robot.
788c00bf689b67edc845a0e1a0ca56f7b5a65423
854
852
2024-05-05T06:40:29Z
Vrtnis
21
/* ROS (Robot Operating System) */
wikitext
text/x-wiki
This is a build guide for getting started experimenting with your own humanoid robot.
This is incomplete; you can help by expanding it!
'''Update:''' work in progress; starting from a template, with plans to expand each section.
This guide is crafted for enthusiasts who are not just looking to study humanoid robotics but to actually build and experiment with their own robots.
== Building Your Humanoid Robot ==
In humanoid robotics, choosing the right components, such as actuators and gearboxes, is crucial. Builders can use planetary and cycloidal gear actuators for their precision and strength, along with Series Elastic and Quasi-Direct Drive actuators for smoother, more natural movements. Advanced designs like the MIT Cheetah actuator push the boundaries with fast, agile movements. Projects like the SPIN initiative are also key, as they make high-quality actuator technology more accessible, helping the field evolve and improve.
== Actuators and Gearboxes ==
=== Actuator Types and Design Inspirations ===
==== Planetary and Cycloidal Gear Actuators ====
These actuators remain popular in the robotics community due to their high torque output and compact form factors. Planetary gears are favored for their efficiency and ability to handle high power densities, crucial for humanoid robotics. Cycloidal gears offer superior load-bearing capabilities and minimal backlash, ideal for precise motion control.
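The trade-offs above are easy to quantify. The sketch below uses purely illustrative numbers (not specifications of any particular motor or gearbox) to show how a reduction multiplies torque, divides speed, and grows reflected inertia with the square of the ratio, which is why very high reductions hurt backdrivability.
<syntaxhighlight lang=python>
# Back-of-the-envelope gearbox math for actuator selection.
# All numbers are illustrative assumptions, not vendor specifications.

def geared_output(motor_torque_nm, motor_speed_rpm, motor_inertia_kgm2,
                  gear_ratio, efficiency=0.85):
    """Scale motor quantities through a gear reduction."""
    output_torque = motor_torque_nm * gear_ratio * efficiency   # torque multiplies
    output_speed = motor_speed_rpm / gear_ratio                 # speed divides
    reflected_inertia = motor_inertia_kgm2 * gear_ratio ** 2    # inertia grows with N^2
    return output_torque, output_speed, reflected_inertia

# Hypothetical BLDC motor: 0.5 Nm continuous, 3000 rpm, 60 g*cm^2 rotor inertia.
for ratio in (6, 10, 36):
    t, s, j = geared_output(0.5, 3000, 6e-6, ratio)
    print(f"{ratio:>3}:1  torque {t:5.1f} Nm  speed {s:6.1f} rpm  reflected inertia {j:.2e} kg*m^2")
</syntaxhighlight>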
MyActuator (just one option) offers a variety of planetary actuators. These actuators, while still relatively pricey, offer robust performance and are integral to the efficient functioning of the builds. Some models are:
RMD X4: A lightweight and compact actuator that provides precise control and high efficiency.
RMD X6: Offers a good balance of torque and speed, suitable for medium-sized applications.
RMD X8: Features a more powerful motor and higher torque capacity, making it ideal for more demanding tasks.
RMD X10: The most powerful actuator used, designed for high torque applications with excellent control features.
Here is an MIT-hosted [https://jakeread.pages.cba.mit.edu/actuators/ scatter plot] comparing actuators.
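Integrated actuators in this class, including the RMD series, are typically commanded over a CAN bus. The snippet below is only a schematic sketch using the python-can library; the arbitration ID and payload layout are placeholders rather than a verified RMD protocol, so consult your actuator's protocol manual before sending real commands.
<syntaxhighlight lang=python>
# Schematic example of sending a position command over CAN with python-can.
# The ID and payload layout below are placeholders, NOT a verified RMD protocol;
# consult the actuator's protocol documentation before use.
import struct
import can

def send_position_command(bus: can.BusABC, node_id: int, angle_deg: float) -> None:
    # Placeholder framing: command byte, 3 pad bytes, angle as a 32-bit int in 0.01 deg.
    payload = struct.pack("<Bxxxi", 0xA4, int(angle_deg * 100))
    msg = can.Message(arbitration_id=0x140 + node_id, data=payload, is_extended_id=False)
    bus.send(msg)

if __name__ == "__main__":
    bus = can.interface.Bus(channel="can0", bustype="socketcan")
    try:
        send_position_command(bus, node_id=1, angle_deg=45.0)
    finally:
        bus.shutdown()
</syntaxhighlight>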
==== Series Elastic and Quasi-Direct Drive Actuators ====
Series Elastic Actuators (SEAs) are used in applications requiring safe and compliant human-robot interaction. They incorporate elastic elements, allowing for energy absorption and safer interactions. Quasi-Direct Drive Actuators provide a balance between the control fidelity of direct drives and the mechanical simplicity of geared systems, promoting natural and responsive movements.
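A practical consequence of the series spring is that joint torque can be estimated directly from spring deflection. A minimal sketch, assuming an ideal linear torsion spring with a known stiffness:
<syntaxhighlight lang=python>
# Estimating joint torque from series-spring deflection in an SEA.
# Assumes an ideal linear torsion spring; real springs need calibration.

def sea_torque(theta_motor_rad: float, theta_joint_rad: float,
               gear_ratio: float, spring_k_nm_per_rad: float) -> float:
    """Torque through the spring: tau = k * (theta_motor / N - theta_joint)."""
    deflection = theta_motor_rad / gear_ratio - theta_joint_rad
    return spring_k_nm_per_rad * deflection

# Example: motor encoder at 62.9 rad, joint encoder at 0.60 rad, 100:1 gearbox, k = 300 Nm/rad.
print(sea_torque(62.9, 0.60, 100.0, 300.0))   # roughly 8.7 Nm transmitted through the spring
</syntaxhighlight>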
==== MIT Cheetah Actuator ====
The MIT Cheetah actuator design is a notable example that several community members are considering emulating. Its design optimizes for rapid, dynamic movements and could potentially set a standard for agile robotic locomotion.
=== Open-Source Development and Collaboration ===
==== SPIN: A Revolutionary Servo Project ====
The [https://github.com/atopile/spin-servo-drive SPIN Project] by Atopile is developing an open-source hardware project aimed at making it easier and more cost-effective to use BLDC servo motors. This project is particularly notable for its potential to democratize high-quality actuator technology, making it accessible for a broader range of developers and hobbyists.
=== Community Insights and Future Directions ===
==== Comprehensive Actuator Comparisons ====
The community actively discusses the need for a universal platform to compare and contrast the cost and performance of commercially available actuators. This could involve developing a comprehensive database or chart detailing each actuator's cost per Newton-meter, control schemes, and RPM, providing a valuable resource for both newcomers and experienced developers.
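As a starting point for such a comparison, even a small script over a hand-maintained table is useful. The entries below are placeholder figures for illustration, not quoted prices or measured specifications:
<syntaxhighlight lang=python>
# Tiny actuator comparison sketch: dollars per Nm of rated torque.
# All figures are placeholders for illustration; fill in real datasheet values.
from dataclasses import dataclass

@dataclass
class Actuator:
    name: str
    price_usd: float
    rated_torque_nm: float
    no_load_rpm: float

    @property
    def usd_per_nm(self) -> float:
        return self.price_usd / self.rated_torque_nm

catalog = [
    Actuator("example-small-qdd", 250.0, 6.0, 420.0),
    Actuator("example-planetary", 480.0, 25.0, 160.0),
    Actuator("example-cycloidal", 900.0, 70.0, 60.0),
]

for a in sorted(catalog, key=lambda a: a.usd_per_nm):
    print(f"{a.name:<20} {a.usd_per_nm:6.1f} USD/Nm  {a.no_load_rpm:5.0f} rpm")
</syntaxhighlight>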
==== Custom Actuator Developments ====
There are ongoing discussions about custom actuators tailored for specific applications. For example, [https://irisdynamics.com/products/orca-series Iris Dynamics' Orca-series electric linear actuators] are claimed to approach the capabilities of human muscle, making them particularly interesting for humanoid applications.
== Assembly Tips ==
===== Community Forums =====
Leverage discussions from platforms like RobotForum to avoid common pitfalls. Whether it's selecting the right planetary gearbox or figuring out the optimal motor for each joint, community insights can be invaluable.
=== Programming and Control ===
==== ROS (Robot Operating System) ====
Start with ROS for an extensive suite of tools for programming and control, suitable for managing complex robotic functions. ROS is a useful abstraction for understanding the different components of a robotics system. However, it is a large framework that can be hard to modify, the learning curve can be steep, and because it leans heavily on third-party packages, debugging issues can require extra effort and expertise.
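For orientation, here is a minimal ROS 2 (rclpy) node that publishes joint position targets at a fixed rate. The topic name and joint names are assumptions chosen for illustration; match them to whatever your robot's controllers actually expect.
<syntaxhighlight lang=python>
# Minimal ROS 2 node publishing joint position targets at 50 Hz.
# Topic and joint names are illustrative; adapt them to your controller setup.
import math
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import JointState

class JointTargetPublisher(Node):
    def __init__(self):
        super().__init__("joint_target_publisher")
        self.pub = self.create_publisher(JointState, "joint_targets", 10)
        self.t = 0.0
        self.timer = self.create_timer(0.02, self.tick)  # 50 Hz

    def tick(self):
        msg = JointState()
        msg.header.stamp = self.get_clock().now().to_msg()
        msg.name = ["left_knee", "right_knee"]
        msg.position = [0.2 * math.sin(self.t), -0.2 * math.sin(self.t)]
        self.pub.publish(msg)
        self.t += 0.02

def main():
    rclpy.init()
    node = JointTargetPublisher()
    try:
        rclpy.spin(node)
    finally:
        node.destroy_node()
        rclpy.shutdown()

if __name__ == "__main__":
    main()
</syntaxhighlight>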
==== Custom Software Solutions ====
Explore custom algorithms for adaptive control or reactive behaviors. Integrate advanced sensor feedback loops for real-time adjustments.
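As a concrete example of such a feedback loop, here is a bare-bones PD position controller sketch. The read and write functions are hypothetical placeholders standing in for whatever driver interface your actuators expose, and the gains, rate, and torque limit are illustrative values to be tuned on hardware.
<syntaxhighlight lang=python>
# Bare-bones PD position loop for one joint.
# read_joint_state() and send_torque() are hypothetical placeholders for your
# actuator driver; gains, rate, and limits are illustrative only.
import time

KP, KD = 40.0, 1.5          # illustrative gains (Nm/rad, Nm*s/rad)
DT = 0.002                  # 500 Hz control period
TORQUE_LIMIT = 12.0         # clamp to a value that is safe for your hardware

def pd_torque(target_pos, pos, vel):
    torque = KP * (target_pos - pos) - KD * vel
    return max(-TORQUE_LIMIT, min(TORQUE_LIMIT, torque))

def control_loop(read_joint_state, send_torque, target_pos=0.3, steps=5000):
    for _ in range(steps):
        pos, vel = read_joint_state()      # e.g. encoder angle (rad) and velocity (rad/s)
        send_torque(pd_torque(target_pos, pos, vel))
        time.sleep(DT)                     # a real loop would use a proper real-time scheduler
</syntaxhighlight>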
== Experimenting with Your Humanoid Robot ==
=== Testing and Iteration ===
==== Virtual Testing Before Physical Implementation in Humanoid Robotics ====
NVIDIA's Isaac Sim and Isaac Gym, alongside other simulators, form a crucial foundation for designing and testing humanoid robots virtually. Insights and suggestions from experts working with these tools are captured below.
===== Isaac-Based Simulators and Frameworks =====
====== Isaac Sim ======
IDE Experience: Provides a comprehensive, if complex, simulation environment.
PhysX Engine: Utilizes the PhysX engine to handle both contact and joint constraints, though Isaac Sim currently does not fully expose closed-loop constraint capabilities.
Joint Constraints: Supports maximal coordinate systems, which include joint constraints that are common in articulated robots.
Virtual Sensors: Allows the simulation of perception with virtual cameras and LiDARs, providing policy training inputs rendered with NVIDIA RTX.
====== Isaac Gym ======
Reinforcement Learning Training: Enables parallel environments for fast policy training.
PHC Approach: Integrates AMP for real-time pose control, making it easier to teach new skills.
Gait Optimization Issues: While 17-DOF walking tasks work well, gait reward optimization needs refinement for more complex tasks.
Closed-Loop Articulation: Belt-driven mechanisms provide a viable alternative for certain closed-loop designs.
====== Orbit Framework ======
Unified Training Framework: Integrates Isaac Sim and Isaac Gym for modular and consistent policy validation.
OmniIsaacGymEnvs: Offers predefined tasks like walking and standing.
====== Omniverse Isaac Gym ======
Shift in Development: NVIDIA is consolidating Isaac Gym into Isaac Sim through Omniverse, providing the best of both worlds.
Challenges: Demands powerful NVIDIA GPUs, potentially limiting some development workflows.
===== External Tools and Comparative Platforms =====
====== Legged Gym ======
A repository showcasing the state-of-the-art in legged robot training.
====== MuJoCo (MJX) ======
Offers a lightweight, open-source alternative that supports maximal coordinate constraints and is generally easier to work with.
====== VSim ======
Claims to be 10x faster than other simulators.
====== ManiSkill/Sapien ======
Provides tactile simulation and visual-based policy training that is up to 100x faster than Isaac Sim.
===== Best Practices for Virtual Testing =====
- Incremental Complexity: Start simple and build up to more complex environments and tasks.
- Cross-Simulator Validation: Validate robot models across simulators (e.g., Isaac and MuJoCo) to ensure robustness.
- Incorporate Real-World Fidelity: Include sensor noise and imperfections for better policy generalization.
- Optimize Resources:
  - Use Azure's A100 GPUs for Isaac training.
  - Capture real-world data to refine virtual training.
By understanding the nuances and strengths of each simulator, developers can refine their humanoid robots effectively. Using Isaac Sim, Isaac Gym, and complementary tools, a robust simulation approach ensures smooth virtual-to-physical transferability while reducing development time and costs.
== Real-World Testing ==
Gradually transition to physical testing, beginning with simple tasks and moving to more complex interactions.
=== Data Collection and Analysis ===
==== Camera Systems ====
Consider integrating advanced camera systems like those from e-con Systems or Arducam for visual feedback and navigation. Weigh camera choices against factors like latency, resolution, and ease of integration with your main control system.
== Advanced Customization and Community Engagement ==
=== Open Source Projects ===
Contribute to or start your own open-source project. For instance, platforms like GitHub host numerous projects where you can collaborate with others, such as [https://github.com/kscalelabs K-Scale Labs].
=== Modular Design ===
Engage in modular robot design to easily swap components or aesthetics. This approach allows for extensive customization and upgrades over time.
== Safety and Continuous Learning ==
=== Safety Protocols ===
Always implement robust safety measures when testing and demonstrating your robot.
ce31111d3fee041997fb0cd72a675bb976e27efd
855
854
2024-05-05T18:43:18Z
Vrtnis
21
/* MIT Cheetah Actuator */
wikitext
text/x-wiki
This is a build guide for getting started experimenting with your own humanoid robot.
This is incomplete; you can help by expanding it!
'''Update:''' ''work in progress - starting with a template, plan to expand on sections :)''
This guide is crafted for enthusiasts who are not just looking to study humanoid robotics but to actually build and experiment with their own robots.
== Building Your Humanoid Robot ==
In humanoid robotics, choosing the right components, such as actuators and gearboxes, is crucial. Folks can use planetary and cycloidal gear actuators for their precision and strength, along with Series Elastic and Quasi-Direct Drive actuators for smoother, more natural movements. Advanced designs like the MIT Cheetah actuator push the boundaries with fast, agile movements. Projects like the SPIN initiative are also key, as they make high-quality actuator technology more accessible, helping the field evolve and improve.
== Actuators and Gearboxes ==
=== Actuator Types and Design Inspirations ===
==== Planetary and Cycloidal Gear Actuators ====
These actuators remain popular in the robotics community due to their high torque output and compact form factors. Planetary gears are favored for their efficiency and ability to handle high power densities, crucial for humanoid robotics. Cycloidal gears offer superior load-bearing capabilities and minimal backlash, ideal for precise motion control.
MyActuator (just one option) offers a variety of planetary actuators. These actuators, while still relatively pricey, offer robust performance and are integral to the efficient functioning of the builds. Some models are:
RMD X4: A lightweight and compact actuator that provides precise control and high efficiency.
RMD X6: Offers a good balance of torque and speed, suitable for medium-sized applications.
RMD X8: Features a more powerful motor and higher torque capacity, making it ideal for more demanding tasks.
RMD X10: The most powerful of the series, designed for high-torque applications with excellent control features.
Here is a [https://jakeread.pages.cba.mit.edu/actuators/ scatter plot] of actuators hosted at MIT.
==== Series Elastic and Quasi-Direct Drive Actuators ====
Series Elastic Actuators (SEAs) are used in applications requiring safe and compliant human-robot interaction. They incorporate elastic elements, allowing for energy absorption and safer interactions. Quasi-Direct Drive Actuators provide a balance between the control fidelity of direct drives and the mechanical simplicity of geared systems, promoting natural and responsive movements.
==== MIT Cheetah Actuator ====
The MIT Cheetah actuator design is a notable example that several community members are considering emulating. Its design optimizes for rapid, dynamic movements and could potentially set a standard for agile robotic locomotion. It's designed to pack a lot of power into a lightweight, compact system. It offers excellent torque and control without being bulky, making it perfect for mobile robots that need to be quick on their feet. Also, it's energy-efficient and provides a high torque-to-weight ratio, so robots can move fast and precisely, which is essential for those tricky, agile movements.
One of the coolest things about this actuator is how it manages to minimize backlash, giving you smooth, accurate control over the robot's motion. Its integrated design also means the motor and controller work together seamlessly, which keeps the system streamlined. Plus, the advanced control algorithms make it easy for the actuator to handle dynamic motions, whether it's fast acceleration or sharp turns. In short, if you're building a robot that needs to move like a sprinter while staying super nimble, the MIT Cheetah actuator is an awesome choice.
=== Open-Source Development and Collaboration ===
==== SPIN: A Revolutionary Servo Project ====
The [https://github.com/atopile/spin-servo-drive SPIN Project] by Atopile is an open-source hardware project aimed at making BLDC servo motors easier and more cost-effective to use. It is particularly notable for its potential to democratize high-quality actuator technology, making it accessible to a broader range of developers and hobbyists.
=== Community Insights and Future Directions ===
==== Comprehensive Actuator Comparisons ====
The community actively discusses the need for a universal platform to compare and contrast the cost and performance of commercially available actuators. This could involve developing a comprehensive database or chart detailing each actuator's cost per Newton-meter, control schemes, and RPM, providing a valuable resource for both newcomers and experienced developers.
==== Custom Actuator Developments ====
There are discussions about custom actuator developments tailored for specific applications. For example, [https://irisdynamics.com/products/orca-series Iris Dynamics' Orca-series electric linear actuators] are claimed to match the capabilities of human muscle, making them particularly interesting for humanoid applications.
== Assembly Tips ==
=== Community Forums ===
Leverage discussions from platforms like RobotForum to avoid common pitfalls. Whether it's selecting the right planetary gearbox or figuring out the optimal motor for each joint, community insights can be invaluable.
=== Programming and Control ===
==== ROS (Robot Operating System) ====
Start with ROS for an extensive suite of tools for programming and control, suitable for managing complex robotic functions. ROS is a useful abstraction for understanding the different components of a robotics system. However, it is a large framework that can be hard to modify, its learning curve can be steep, and because it leans heavily on third-party packages, debugging issues can require extra effort and expertise.
==== Custom Software Solutions ====
Explore custom algorithms for adaptive control or reactive behaviors. Integrate advanced sensor feedback loops for real-time adjustments.
== Experimenting with Your Humanoid Robot ==
=== Testing and Iteration ===
==== Virtual Testing Before Physical Implementation in Humanoid Robotics ====
NVIDIA's Isaac Sim and Isaac Gym, alongside other simulators, form a crucial foundation for designing and testing humanoid robots virtually. Insights and suggestions from experts working with these tools are captured below.
===== Isaac-Based Simulators and Frameworks =====
====== Isaac Sim ======
IDE Experience: Provides a comprehensive, if complex, simulation environment.
PhysX Engine: Utilizes the PhysX engine to handle both contact and joint constraints, though Isaac Sim currently does not fully expose closed-loop constraint capabilities.
Joint Constraints: Supports maximal coordinate systems, which include joint constraints that are common in articulated robots.
Virtual Sensors: Allows the simulation of perception with virtual cameras and LiDARs, providing policy training inputs rendered with NVIDIA RTX.
====== Isaac Gym ======
Reinforcement Learning Training: Enables parallel environments for fast policy training.
PHC Approach: Integrates AMP for real-time pose control, making it easier to teach new skills.
Gait Optimization Issues: While 17-DOF walking tasks work well, gait reward optimization needs refinement for more complex tasks.
Closed-Loop Articulation: Belt-driven mechanisms provide a viable alternative for certain closed-loop designs.
====== Orbit Framework ======
Unified Training Framework: Integrates Isaac Sim and Isaac Gym for modular and consistent policy validation.
OmniIsaacGymEnvs: Offers predefined tasks like walking and standing.
====== Omniverse Isaac Gym ======
Shift in Development: NVIDIA is consolidating Isaac Gym into Isaac Sim through Omniverse, providing the best of both worlds.
Challenges: Demands powerful NVIDIA GPUs, potentially limiting some development workflows.
===== External Tools and Comparative Platforms =====
====== Legged Gym ======
A repository showcasing the state-of-the-art in legged robot training.
====== MuJoCo (MJX) ======
Offers a lightweight, open-source alternative that supports maximal coordinate constraints and is generally easier to work with.
====== VSim ======
Claims to be 10x faster than other simulators.
====== ManiSkill/Sapien ======
Provides tactile simulation and visual-based policy training that is up to 100x faster than Isaac Sim.
===== Best Practices for Virtual Testing =====
- Incremental Complexity: Start simple and build up to more complex environments and tasks.
- Cross-Simulator Validation: Validate robot models across simulators (e.g., Isaac and MuJoCo) to ensure robustness.
- Incorporate Real-World Fidelity: Include sensor noise and imperfections for better policy generalization.
- Optimize Resources:
  - Use Azure's A100 GPUs for Isaac training.
  - Capture real-world data to refine virtual training.
By understanding the nuances and strengths of each simulator, developers can refine their humanoid robots effectively. Using Isaac Sim, Isaac Gym, and complementary tools, a robust simulation approach ensures smooth virtual-to-physical transferability while reducing development time and costs.
== Real-World Testing ==
Gradually transition to physical testing, beginning with simple tasks and moving to more complex interactions.
=== Data Collection and Analysis ===
==== Camera Systems ====
Consider integrating advanced camera systems like those from e-con Systems or Arducam for visual feedback and navigation. Weigh camera choices against factors like latency, resolution, and ease of integration with your main control system.
== Advanced Customization and Community Engagement ==
=== Open Source Projects ===
Contribute to or start your own open-source project. For instance, platforms like GitHub host numerous projects where you can collaborate with others, such as [https://github.com/kscalelabs K-Scale Labs].
=== Modular Design ===
Engage in modular robot design to easily swap components or aesthetics. This approach allows for extensive customization and upgrades over time.
== Safety and Continuous Learning ==
=== Safety Protocols ===
Always implement robust safety measures when testing and demonstrating your robot.
c2a41f09e5d28f4ff1c775e2152e4de0b900b86c
856
855
2024-05-05T18:43:43Z
Vrtnis
21
/* MIT Cheetah Actuator */
wikitext
text/x-wiki
This is a build guide for getting started experimenting with your own humanoid robot.
This is incomplete; you can help by expanding it!
'''Update:''' ''work in progress - starting with a template, plan to expand on sections :)''
This guide is crafted for enthusiasts who are not just looking to study humanoid robotics but to actually build and experiment with their own robots.
== Building Your Humanoid Robot ==
In humanoid robotics, choosing the right components, such as actuators and gearboxes, is crucial. Folks can use planetary and cycloidal gear actuators for their precision and strength, along with Series Elastic and Quasi-Direct Drive actuators for smoother, more natural movements. Advanced designs like the MIT Cheetah actuator push the boundaries with fast, agile movements. Projects like the SPIN initiative are also key, as they make high-quality actuator technology more accessible, helping the field evolve and improve.
== Actuators and Gearboxes ==
=== Actuator Types and Design Inspirations ===
==== Planetary and Cycloidal Gear Actuators ====
These actuators remain popular in the robotics community due to their high torque output and compact form factors. Planetary gears are favored for their efficiency and ability to handle high power densities, crucial for humanoid robotics. Cycloidal gears offer superior load-bearing capabilities and minimal backlash, ideal for precise motion control.
MyActuator (just one option) offers a variety of planetary actuators. These actuators, while still relatively pricey, offer robust performance and are integral to the efficient functioning of the builds. Some models are:
RMD X4: A lightweight and compact actuator that provides precise control and high efficiency.
RMD X6: Offers a good balance of torque and speed, suitable for medium-sized applications.
RMD X8: Features a more powerful motor and higher torque capacity, making it ideal for more demanding tasks.
RMD X10: The most powerful of the series, designed for high-torque applications with excellent control features.
Here is a [https://jakeread.pages.cba.mit.edu/actuators/ scatter plot] of actuators hosted at MIT.
==== Series Elastic and Quasi-Direct Drive Actuators ====
Series Elastic Actuators (SEAs) are used in applications requiring safe and compliant human-robot interaction. They incorporate elastic elements, allowing for energy absorption and safer interactions. Quasi-Direct Drive Actuators provide a balance between the control fidelity of direct drives and the mechanical simplicity of geared systems, promoting natural and responsive movements.
==== MIT Cheetah Actuator ====
The MIT Cheetah actuator design is a notable example that several community members are considering emulating. Its design optimizes for rapid, dynamic movements and could potentially set a standard for agile robotic locomotion. It's designed to pack a lot of power into a lightweight, compact system. It offers excellent torque and control without being bulky, making it perfect for mobile robots that need to be quick on their feet. Also, it's energy-efficient and provides a high torque-to-weight ratio, so robots can move fast and precisely, which is essential for those tricky, agile movements.
One of the coolest things about this actuator is how it manages to minimize backlash, giving you smooth, accurate control over the robot's motion. Its integrated design also means the motor and controller work together seamlessly, which keeps the system streamlined. Plus, the advanced control algorithms make it easy for the actuator to handle dynamic motions, whether it's fast acceleration or sharp turns. If you're building a robot that needs to move like a sprinter while staying super nimble, the MIT Cheetah actuator is an awesome choice.
=== Open-Source Development and Collaboration ===
==== SPIN: A Revolutionary Servo Project ====
The [https://github.com/atopile/spin-servo-drive SPIN Project] by Atopile is an open-source hardware project aimed at making BLDC servo motors easier and more cost-effective to use. It is particularly notable for its potential to democratize high-quality actuator technology, making it accessible to a broader range of developers and hobbyists.
=== Community Insights and Future Directions ===
==== Comprehensive Actuator Comparisons ====
The community actively discusses the need for a universal platform to compare and contrast the cost and performance of commercially available actuators. This could involve developing a comprehensive database or chart detailing each actuator's cost per Newton-meter, control schemes, and RPM, providing a valuable resource for both newcomers and experienced developers.
==== Custom Actuator Developments ====
There are discussions about custom actuator developments tailored for specific applications. For example, [https://irisdynamics.com/products/orca-series Iris Dynamics' Orca-series electric linear actuators] are claimed to match the capabilities of human muscle, making them particularly interesting for humanoid applications.
== Assembly Tips ==
=== Community Forums ===
Leverage discussions from platforms like RobotForum to avoid common pitfalls. Whether it's selecting the right planetary gearbox or figuring out the optimal motor for each joint, community insights can be invaluable.
=== Programming and Control ===
==== ROS (Robot Operating System) ====
Start with ROS for an extensive suite of tools for programming and control, suitable for managing complex robotic functions. ROS is a useful abstraction for understanding the different components of a robotics system. However, it is a large framework that can be hard to modify, its learning curve can be steep, and because it leans heavily on third-party packages, debugging issues can require extra effort and expertise.
==== Custom Software Solutions ====
Explore custom algorithms for adaptive control or reactive behaviors. Integrate advanced sensor feedback loops for real-time adjustments.
== Experimenting with Your Humanoid Robot ==
=== Testing and Iteration ===
==== Virtual Testing Before Physical Implementation in Humanoid Robotics ====
NVIDIA's Isaac Sim and Isaac Gym, alongside other simulators, form a crucial foundation for designing and testing humanoid robots virtually. Insights and suggestions from experts working with these tools are captured below.
===== Isaac-Based Simulators and Frameworks =====
====== Isaac Sim ======
IDE Experience: Provides a comprehensive, if complex, simulation environment.
PhysX Engine: Utilizes the PhysX engine to handle both contact and joint constraints, though Isaac Sim currently does not fully expose closed-loop constraint capabilities.
Joint Constraints: Supports maximal coordinate systems, which include joint constraints that are common in articulated robots.
Virtual Sensors: Allows the simulation of perception with virtual cameras and LiDARs, providing policy training inputs rendered with NVIDIA RTX.
====== Isaac Gym ======
Reinforcement Learning Training: Enables parallel environments for fast policy training.
PHC Approach: Integrates AMP for real-time pose control, making it easier to teach new skills.
Gait Optimization Issues: While 17-DOF walking tasks work well, gait reward optimization needs refinement for more complex tasks.
Closed-Loop Articulation: Belt-driven mechanisms provide a viable alternative for certain closed-loop designs.
====== Orbit Framework ======
Unified Training Framework: Integrates Isaac Sim and Isaac Gym for modular and consistent policy validation.
OmniIsaacGymEnvs: Offers predefined tasks like walking and standing.
====== Omniverse Isaac Gym ======
Shift in Development: NVIDIA is consolidating Isaac Gym into Isaac Sim through Omniverse, providing the best of both worlds.
Challenges: Demands powerful NVIDIA GPUs, potentially limiting some development workflows.
===== External Tools and Comparative Platforms =====
====== Legged Gym ======
A repository showcasing the state-of-the-art in legged robot training.
====== MuJoCo (MJX) ======
Offers a lightweight, open-source alternative that supports maximal coordinate constraints and is generally easier to work with.
====== VSim ======
Claims to be 10x faster than other simulators.
====== ManiSkill/Sapien ======
Provides tactile simulation and visual-based policy training that is up to 100x faster than Isaac Sim.
===== Best Practices for Virtual Testing =====
- Incremental Complexity: Start simple and build up to more complex environments and tasks.
- Cross-Simulator Validation: Validate robot models across simulators (e.g., Isaac and MuJoCo) to ensure robustness.
- Incorporate Real-World Fidelity: Include sensor noise and imperfections for better policy generalization.
- Optimize Resources:
  - Use Azure's A100 GPUs for Isaac training.
  - Capture real-world data to refine virtual training.
By understanding the nuances and strengths of each simulator, developers can refine their humanoid robots effectively. Using Isaac Sim, Isaac Gym, and complementary tools, a robust simulation approach ensures smooth virtual-to-physical transferability while reducing development time and costs.
== Real-World Testing ==
Gradually transition to physical testing, beginning with simple tasks and moving to more complex interactions.
=== Data Collection and Analysis ===
==== Camera Systems ====
Consider integrating advanced camera systems like those from e-con Systems or Arducam for visual feedback and navigation. Weigh camera choices against factors like latency, resolution, and ease of integration with your main control system.
== Advanced Customization and Community Engagement ==
=== Open Source Projects ===
Contribute to or start your own open-source project. For instance, platforms like GitHub host numerous projects where you can collaborate with others, such as [https://github.com/kscalelabs K-Scale Labs].
=== Modular Design ===
Engage in modular robot design to easily swap components or aesthetics. This approach allows for extensive customization and upgrades over time.
== Safety and Continuous Learning ==
=== Safety Protocols ===
Always implement robust safety measures when testing and demonstrating your robot.
c68c04e769f5357c86f55b1b1798141fb5594940
857
856
2024-05-05T18:45:01Z
Vrtnis
21
/* MIT Cheetah Actuator */
wikitext
text/x-wiki
This is a build guide for getting started experimenting with your own humanoid robot.
This is incomplete; you can help by expanding it!
'''Update:''' ''work in progress - starting with a template, plan to expand on sections :)''
This guide is crafted for enthusiasts who are not just looking to study humanoid robotics but to actually build and experiment with their own robots.
== Building Your Humanoid Robot ==
In humanoid robotics, choosing the right components, such as actuators and gearboxes, is crucial. Folks can use planetary and cycloidal gear actuators for their precision and strength, along with Series Elastic and Quasi-Direct Drive actuators for smoother, more natural movements. Advanced designs like the MIT Cheetah actuator push the boundaries with fast, agile movements. Projects like the SPIN initiative are also key, as they make high-quality actuator technology more accessible, helping the field evolve and improve.
== Actuators and Gearboxes ==
=== Actuator Types and Design Inspirations ===
==== Planetary and Cycloidal Gear Actuators ====
These actuators remain popular in the robotics community due to their high torque output and compact form factors. Planetary gears are favored for their efficiency and ability to handle high power densities, crucial for humanoid robotics. Cycloidal gears offer superior load-bearing capabilities and minimal backlash, ideal for precise motion control.
MyActuator (just one option) offers a variety of planetary actuators. These actuators, while still relatively pricey, offer robust performance and are integral to the efficient functioning of the builds. Some models are:
RMD X4: A lightweight and compact actuator that provides precise control and high efficiency.
RMD X6: Offers a good balance of torque and speed, suitable for medium-sized applications.
RMD X8: Features a more powerful motor and higher torque capacity, making it ideal for more demanding tasks.
RMD X10: The most powerful of the series, designed for high-torque applications with excellent control features.
Here is a [https://jakeread.pages.cba.mit.edu/actuators/ scatter plot] of actuators hosted at MIT.
==== Series Elastic and Quasi-Direct Drive Actuators ====
Series Elastic Actuators (SEAs) are used in applications requiring safe and compliant human-robot interaction. They incorporate elastic elements, allowing for energy absorption and safer interactions. Quasi-Direct Drive Actuators provide a balance between the control fidelity of direct drives and the mechanical simplicity of geared systems, promoting natural and responsive movements.
==== MIT Cheetah Actuator ====
The MIT Cheetah actuator design is a notable example that several community members are considering emulating. Its design optimizes for rapid, dynamic movements and could potentially set a standard for agile robotic locomotion. It's designed to pack a lot of power into a lightweight, compact system. It offers excellent torque and control without being bulky, making it perfect for mobile robots that need to be quick on their feet. Also, it's energy-efficient and provides a high torque-to-weight ratio, so robots can move fast and precisely, which is essential for those tricky, agile movements.
One of the coolest things about this actuator is how it manages to minimize backlash, giving you smooth, accurate control over the robot's motion. Its integrated design also means the motor and controller work together seamlessly, which keeps the system streamlined. Plus, the advanced control algorithms make it easy for the actuator to handle dynamic motions, whether it's fast acceleration or sharp turns. If you're building a robot that needs to move like a sprinter while staying super nimble, the MIT Cheetah actuator is an awesome choice.
Here is the [https://fab.cba.mit.edu/classes/865.18/motion/papers/mit-cheetah-actuator.pdf MIT research paper] if you are interested in a deeper dive.
=== Open-Source Development and Collaboration ===
==== SPIN: A Revolutionary Servo Project ====
The [https://github.com/atopile/spin-servo-drive SPIN Project] by Atopile is an open-source hardware project aimed at making BLDC servo motors easier and more cost-effective to use. It is particularly notable for its potential to democratize high-quality actuator technology, making it accessible to a broader range of developers and hobbyists.
=== Community Insights and Future Directions ===
==== Comprehensive Actuator Comparisons ====
The community actively discusses the need for a universal platform to compare and contrast the cost and performance of commercially available actuators. This could involve developing a comprehensive database or chart detailing each actuator's cost per Newton-meter, control schemes, and RPM, providing a valuable resource for both newcomers and experienced developers.
==== Custom Actuator Developments ====
There are discussions about custom actuator developments tailored for specific applications. For example, [https://irisdynamics.com/products/orca-series Iris Dynamics' Orca-series electric linear actuators] are claimed to match the capabilities of human muscle, making them particularly interesting for humanoid applications.
== Assembly Tips ==
=== Community Forums ===
Leverage discussions from platforms like RobotForum to avoid common pitfalls. Whether it's selecting the right planetary gearbox or figuring out the optimal motor for each joint, community insights can be invaluable.
=== Programming and Control ===
==== ROS (Robot Operating System) ====
Start with ROS for an extensive suite of tools for programming and control, suitable for managing complex robotic functions. ROS is a useful abstraction for understanding the different components of a robotics system. However, it is a large framework that can be hard to modify, its learning curve can be steep, and because it leans heavily on third-party packages, debugging issues can require extra effort and expertise.
==== Custom Software Solutions ====
Explore custom algorithms for adaptive control or reactive behaviors. Integrate advanced sensor feedback loops for real-time adjustments.
== Experimenting with Your Humanoid Robot ==
=== Testing and Iteration ===
==== Virtual Testing Before Physical Implementation in Humanoid Robotics ====
NVIDIA's Isaac Sim and Isaac Gym, alongside other simulators, form a crucial foundation for designing and testing humanoid robots virtually. Insights and suggestions from experts working with these tools are captured below.
===== Isaac-Based Simulators and Frameworks =====
====== Isaac Sim ======
IDE Experience: Provides a comprehensive, if complex, simulation environment.
PhysX Engine: Utilizes the PhysX engine to handle both contact and joint constraints, though Isaac Sim currently does not fully expose closed-loop constraint capabilities.
Joint Constraints: Supports maximal coordinate systems, which include joint constraints that are common in articulated robots.
Virtual Sensors: Allows the simulation of perception with virtual cameras and LiDARs, providing policy training inputs rendered with NVIDIA RTX.
====== Isaac Gym ======
Reinforcement Learning Training: Enables parallel environments for fast policy training.
PHC Approach: Integrates AMP for real-time pose control, making it easier to teach new skills.
Gait Optimization Issues: While 17-DOF walking tasks work well, gait reward optimization needs refinement for more complex tasks.
Closed-Loop Articulation: Belt-driven mechanisms provide a viable alternative for certain closed-loop designs.
====== Orbit Framework ======
Unified Training Framework: Integrates Isaac Sim and Isaac Gym for modular and consistent policy validation.
OmniIsaacGymEnvs: Offers predefined tasks like walking and standing.
====== Omniverse Isaac Gym ======
Shift in Development: NVIDIA is consolidating Isaac Gym into Isaac Sim through Omniverse, providing the best of both worlds.
Challenges: Demands powerful NVIDIA GPUs, potentially limiting some development workflows.
===== External Tools and Comparative Platforms =====
====== Legged Gym ======
A repository showcasing the state-of-the-art in legged robot training.
====== MuJoCo (MJX) ======
Offers a lightweight, open-source alternative that supports maximal coordinate constraints and is generally easier to work with.
====== VSim ======
Claims to be 10x faster than other simulators.
====== ManiSkill/Sapien ======
Provides tactile simulation and visual-based policy training that is up to 100x faster than Isaac Sim.
===== Best Practices for Virtual Testing =====
- Incremental Complexity: Start simple and build up to more complex environments and tasks.
- Cross-Simulator Validation: Validate robot models across simulators (e.g., Isaac and MuJoCo) to ensure robustness.
- Incorporate Real-World Fidelity: Include sensor noise and imperfections for better policy generalization.
- Optimize Resources:
  - Use Azure's A100 GPUs for Isaac training.
  - Capture real-world data to refine virtual training.
By understanding the nuances and strengths of each simulator, developers can refine their humanoid robots effectively. Using Isaac Sim, Isaac Gym, and complementary tools, a robust simulation approach ensures smooth virtual-to-physical transferability while reducing development time and costs.
== Real-World Testing ==
Gradually transition to physical testing, beginning with simple tasks and moving to more complex interactions.
=== Data Collection and Analysis ===
==== Camera Systems ====
Consider integrating advanced camera systems like those from e-con Systems or Arducam for visual feedback and navigation. Weigh camera choices against factors like latency, resolution, and ease of integration with your main control system.
== Advanced Customization and Community Engagement ==
=== Open Source Projects ===
Contribute to or start your own open-source project. For instance, platforms like GitHub host numerous projects where you can collaborate with others, such as [https://github.com/kscalelabs K-Scale Labs].
=== Modular Design ===
Engage in modular robot design to easily swap components or aesthetics. This approach allows for extensive customization and upgrades over time.
== Safety and Continuous Learning ==
=== Safety Protocols ===
Always implement robust safety measures when testing and demonstrating your robot.
bae0cba8c9b7950a9221c08098444ad4995f30a8
858
857
2024-05-05T18:45:20Z
Vrtnis
21
/* MIT Cheetah Actuator */
wikitext
text/x-wiki
This is a build guide for getting started experimenting with your own humanoid robot.
This is incomplete; you can help by expanding it!
'''Update:''' ''work in progress - starting with a template, plan to expand on sections :)''
This guide is crafted for enthusiasts who are not just looking to study humanoid robotics but to actually build and experiment with their own robots.
== Building Your Humanoid Robot ==
In humanoid robotics, choosing the right components, such as actuators and gearboxes, is crucial. Folks can use planetary and cycloidal gear actuators for their precision and strength, along with Series Elastic and Quasi-Direct Drive actuators for smoother, more natural movements. Advanced designs like the MIT Cheetah actuator push the boundaries with fast, agile movements. Projects like the SPIN initiative are also key, as they make high-quality actuator technology more accessible, helping the field evolve and improve.
== Actuators and Gearboxes ==
=== Actuator Types and Design Inspirations ===
==== Planetary and Cycloidal Gear Actuators ====
These actuators remain popular in the robotics community due to their high torque output and compact form factors. Planetary gears are favored for their efficiency and ability to handle high power densities, crucial for humanoid robotics. Cycloidal gears offer superior load-bearing capabilities and minimal backlash, ideal for precise motion control.
MyActuator (just one option) offers a variety of planetary actuators. These actuators, while still relatively pricey, offer robust performance and are integral to the efficient functioning of the builds. Some models are:
RMD X4: A lightweight and compact actuator that provides precise control and high efficiency.
RMD X6: Offers a good balance of torque and speed, suitable for medium-sized applications.
RMD X8: Features a more powerful motor and higher torque capacity, making it ideal for more demanding tasks.
RMD X10: The most powerful of the series, designed for high-torque applications with excellent control features.
Here is a [https://jakeread.pages.cba.mit.edu/actuators/ scatter plot] of actuators hosted at MIT.
==== Series Elastic and Quasi-Direct Drive Actuators ====
Series Elastic Actuators (SEAs) are used in applications requiring safe and compliant human-robot interaction. They incorporate elastic elements, allowing for energy absorption and safer interactions. Quasi-Direct Drive Actuators provide a balance between the control fidelity of direct drives and the mechanical simplicity of geared systems, promoting natural and responsive movements.
==== MIT Cheetah Actuator ====
The MIT Cheetah actuator design is a notable example that several community members are considering emulating. Its design optimizes for rapid, dynamic movements and could potentially set a standard for agile robotic locomotion. It's designed to pack a lot of power into a lightweight, compact system. It offers excellent torque and control without being bulky, making it perfect for mobile robots that need to be quick on their feet. Also, it's energy-efficient and provides a high torque-to-weight ratio, so robots can move fast and precisely, which is essential for those tricky, agile movements.
One of the coolest things about this actuator is how it manages to minimize backlash, giving you smooth, accurate control over the robot's motion. Its integrated design also means the motor and controller work together seamlessly, which keeps the system streamlined. Plus, the advanced control algorithms make it easy for the actuator to handle dynamic motions, whether it's fast acceleration or sharp turns. If you're building a robot that needs to move like a sprinter while staying super nimble, the MIT Cheetah actuator is an awesome choice.
Here is the [https://fab.cba.mit.edu/classes/865.18/motion/papers/mit-cheetah-actuator.pdf MIT research paper] if you are interested in a deeper dive.
=== Open-Source Development and Collaboration ===
==== SPIN: A Revolutionary Servo Project ====
The [https://github.com/atopile/spin-servo-drive SPIN Project] by Atopile is an open-source hardware project aimed at making BLDC servo motors easier and more cost-effective to use. It is particularly notable for its potential to democratize high-quality actuator technology, making it accessible to a broader range of developers and hobbyists.
=== Community Insights and Future Directions ===
==== Comprehensive Actuator Comparisons ====
The community actively discusses the need for a universal platform to compare and contrast the cost and performance of commercially available actuators. This could involve developing a comprehensive database or chart detailing each actuator's cost per Newton-meter, control schemes, and RPM, providing a valuable resource for both newcomers and experienced developers.
==== Custom Actuator Developments ====
There are discussions about custom actuator developments tailored for specific applications. For example, [https://irisdynamics.com/products/orca-series Iris Dynamics' Orca-series electric linear actuators] are claimed to match the capabilities of human muscle, making them particularly interesting for humanoid applications.
== Assembly Tips ==
=== Community Forums ===
Leverage discussions from platforms like RobotForum to avoid common pitfalls. Whether it's selecting the right planetary gearbox or figuring out the optimal motor for each joint, community insights can be invaluable.
=== Programming and Control ===
==== ROS (Robot Operating System) ====
Start with ROS for an extensive suite of tools for programming and control, suitable for managing complex robotic functions. ROS is a useful abstraction for understanding the different components of a robotics system. However, it is a large framework that can be hard to modify, its learning curve can be steep, and because it leans heavily on third-party packages, debugging issues can require extra effort and expertise.
==== Custom Software Solutions ====
Explore custom algorithms for adaptive control or reactive behaviors. Integrate advanced sensor feedback loops for real-time adjustments.
== Experimenting with Your Humanoid Robot ==
=== Testing and Iteration ===
==== Virtual Testing Before Physical Implementation in Humanoid Robotics ====
NVIDIA's Isaac Sim and Isaac Gym, alongside other simulators, form a crucial foundation for designing and testing humanoid robots virtually. Insights and suggestions from experts working with these tools are captured below.
===== Isaac-Based Simulators and Frameworks =====
====== Isaac Sim ======
IDE Experience: Provides a comprehensive, if complex, simulation environment.
PhysX Engine: Utilizes the PhysX engine to handle both contact and joint constraints, though Isaac Sim currently does not fully expose closed-loop constraint capabilities.
Joint Constraints: Supports maximal coordinate systems, which include joint constraints that are common in articulated robots.
Virtual Sensors: Allows the simulation of perception with virtual cameras and LiDARs, providing policy training inputs rendered with NVIDIA RTX.
====== Isaac Gym ======
Reinforcement Learning Training: Enables parallel environments for fast policy training.
PHC Approach: Integrates AMP for real-time pose control, making it easier to teach new skills.
Gait Optimization Issues: While 17-DOF walking tasks work well, gait reward optimization needs refinement for more complex tasks.
Closed-Loop Articulation: Belt-driven mechanisms provide a viable alternative for certain closed-loop designs.
====== Orbit Framework ======
Unified Training Framework: Integrates Isaac Sim and Isaac Gym for modular and consistent policy validation.
OmniIsaacGymEnvs: Offers predefined tasks like walking and standing.
====== Omniverse Isaac Gym ======
Shift in Development: NVIDIA is consolidating Isaac Gym into Isaac Sim through Omniverse, providing the best of both worlds.
Challenges: Demands powerful NVIDIA GPUs, potentially limiting some development workflows.
===== External Tools and Comparative Platforms =====
====== Legged Gym ======
A repository showcasing the state-of-the-art in legged robot training.
====== MuJoCo (MJX) ======
Offers a lightweight, open-source alternative that supports maximal coordinate constraints and is generally easier to work with.
====== VSim ======
Claims to be 10x faster than other simulators.
====== ManiSkill/Sapien ======
Provides tactile simulation and visual-based policy training that is up to 100x faster than Isaac Sim.
===== Best Practices for Virtual Testing =====
- Incremental Complexity: Start simple and build up to more complex environments and tasks.
- Cross-Simulator Validation: Validate robot models across simulators (e.g., Isaac and MuJoCo) to ensure robustness.
- Incorporate Real-World Fidelity: Include sensor noise and imperfections for better policy generalization.
- Optimize Resources:
  - Use Azure's A100 GPUs for Isaac training.
  - Capture real-world data to refine virtual training.
By understanding the nuances and strengths of each simulator, developers can refine their humanoid robots effectively. Using Isaac Sim, Isaac Gym, and complementary tools, a robust simulation approach ensures smooth virtual-to-physical transferability while reducing development time and costs.
== Real-World Testing ==
Gradually transition to physical testing, beginning with simple tasks and moving to more complex interactions.
=== Data Collection and Analysis ===
==== Camera Systems ====
Consider integrating advanced camera systems like those from e-con Systems or Arducam for visual feedback and navigation. Weigh camera choices against factors like latency, resolution, and ease of integration with your main control system.
== Advanced Customization and Community Engagement ==
=== Open Source Projects ===
Contribute to or start your own open-source project. For instance, platforms like GitHub host numerous projects where you can collaborate with others, such as [https://github.com/kscalelabs K-Scale Labs].
=== Modular Design ===
Engage in modular robot design to easily swap components or aesthetics. This approach allows for extensive customization and upgrades over time.
== Safety and Continuous Learning ==
=== Safety Protocols ===
Always implement robust safety measures when testing and demonstrating your robot.
743407e54cf5ff832358cf51ce76da1a666a1e35
859
858
2024-05-05T19:03:43Z
Vrtnis
21
/* Comprehensive Actuator Comparisons */
wikitext
text/x-wiki
This is a build guide for getting started experimenting with your own humanoid robot.
This is incomplete; you can help by expanding it!
'''Update:''' ''work in progress - starting with a template, plan to expand on sections :)''
This guide is crafted for enthusiasts who are not just looking to study humanoid robotics but to actually build and experiment with their own robots.
== Building Your Humanoid Robot ==
In humanoid robotics, choosing the right components, such as actuators and gearboxes, is crucial. Folks can use planetary and cycloidal gear actuators for their precision and strength, along with Series Elastic and Quasi-Direct Drive actuators for smoother, more natural movements. Advanced designs like the MIT Cheetah actuator push the boundaries with fast, agile movements. Projects like the SPIN initiative are also key, as they make high-quality actuator technology more accessible, helping the field evolve and improve.
== Actuators and Gearboxes ==
=== Actuator Types and Design Inspirations ===
==== Planetary and Cycloidal Gear Actuators ====
These actuators remain popular in the robotics community due to their high torque output and compact form factors. Planetary gears are favored for their efficiency and ability to handle high power densities, crucial for humanoid robotics. Cycloidal gears offer superior load-bearing capabilities and minimal backlash, ideal for precise motion control.
MyActuator (just one option) offers a variety of planetary actuators. These actuators, while still relatively pricey, offer robust performance and are integral to the efficient functioning of the builds. Some models are:
RMD X4: A lightweight and compact actuator that provides precise control and high efficiency.
RMD X6: Offers a good balance of torque and speed, suitable for medium-sized applications.
RMD X8: Features a more powerful motor and higher torque capacity, making it ideal for more demanding tasks.
RMD X10: The most powerful of the series, designed for high-torque applications with excellent control features.
Here is a [https://jakeread.pages.cba.mit.edu/actuators/ scatter plot] of actuators hosted at MIT.
==== Series Elastic and Quasi-Direct Drive Actuators ====
Series Elastic Actuators (SEAs) are used in applications requiring safe and compliant human-robot interaction. They incorporate elastic elements, allowing for energy absorption and safer interactions. Quasi-Direct Drive Actuators provide a balance between the control fidelity of direct drives and the mechanical simplicity of geared systems, promoting natural and responsive movements.
==== MIT Cheetah Actuator ====
The MIT Cheetah actuator design is a notable example that several community members are considering emulating. Its design optimizes for rapid, dynamic movements and could potentially set a standard for agile robotic locomotion. It's designed to pack a lot of power into a lightweight, compact system. It offers excellent torque and control without being bulky, making it perfect for mobile robots that need to be quick on their feet. Also, it's energy-efficient and provides a high torque-to-weight ratio, so robots can move fast and precisely, which is essential for those tricky, agile movements.
One of the coolest things about this actuator is how it manages to minimize backlash, giving you smooth, accurate control over the robot's motion. Its integrated design also means the motor and controller work together seamlessly, which keeps the system streamlined. Plus, the advanced control algorithms make it easy for the actuator to handle dynamic motions, whether it's fast acceleration or sharp turns. If you're building a robot that needs to move like a sprinter while staying super nimble, the MIT Cheetah actuator is an awesome choice.
Here is the [https://fab.cba.mit.edu/classes/865.18/motion/papers/mit-cheetah-actuator.pdf MIT research paper] if you are interested in a deeper dive.
=== Open-Source Development and Collaboration ===
==== SPIN: A Revolutionary Servo Project ====
The [https://github.com/atopile/spin-servo-drive SPIN Project] by Atopile is an open-source hardware project aimed at making BLDC servo motors easier and more cost-effective to use. It is particularly notable for its potential to democratize high-quality actuator technology, making it accessible to a broader range of developers and hobbyists.
=== Community Insights and Future Directions ===
==== Comprehensive Actuator Comparisons ====
The humanoid robotics community actively discusses the need for a universal platform to compare and contrast the cost and performance of commercially available actuators. This could involve developing a comprehensive database or chart detailing each actuator's cost per Newton-meter, control schemes, and RPM, providing a valuable resource for both newcomers and experienced developers.
==== Custom Actuator Developments ====
There are discussions about custom actuator developments tailored for specific applications. For example, [https://irisdynamics.com/products/orca-series Iris Dynamics' Orca-series electric linear actuators] are claimed to match the capabilities of human muscle, making them particularly interesting for humanoid applications.
== Assembly Tips ==
=== Community Forums ===
Leverage discussions from platforms like RobotForum to avoid common pitfalls. Whether it's selecting the right planetary gearbox or figuring out the optimal motor for each joint, community insights can be invaluable.
=== Programming and Control ===
==== ROS (Robot Operating System) ====
Start with ROS for an extensive suite of tools for programming and control, suitable for managing complex robotic functions. ROS is a useful abstraction for understanding the different components of a robotics system. However, it is a large framework that can be hard to modify, its learning curve can be steep, and because it leans heavily on third-party packages, debugging issues can require extra effort and expertise.
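If you go the ROS route, a good first exercise is a small node that publishes joint states at a fixed rate. The sketch below assumes ROS 2 with the rclpy client library and the standard sensor_msgs/JointState message; the node and joint names are arbitrary examples, not part of any particular robot.
<syntaxhighlight lang=python>
# Minimal ROS 2 node (rclpy) that publishes placeholder joint states at 50 Hz.
# Joint names and the sine-wave motion are illustrative only.
import math
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import JointState

class JointStateDemo(Node):
    def __init__(self):
        super().__init__('joint_state_demo')
        self.pub = self.create_publisher(JointState, 'joint_states', 10)
        self.timer = self.create_timer(0.02, self.tick)  # 50 Hz
        self.t = 0.0

    def tick(self):
        msg = JointState()
        msg.header.stamp = self.get_clock().now().to_msg()
        msg.name = ['left_knee', 'right_knee']  # placeholder joints
        msg.position = [0.2 * math.sin(self.t), -0.2 * math.sin(self.t)]
        self.pub.publish(msg)
        self.t += 0.02

def main():
    rclpy.init()
    rclpy.spin(JointStateDemo())
    rclpy.shutdown()

if __name__ == '__main__':
    main()
</syntaxhighlight>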
==== Custom Software Solutions ====
Explore custom algorithms for adaptive control or reactive behaviors. Integrate advanced sensor feedback loops for real-time adjustments.
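For custom feedback loops, a classic starting point is a PID controller wrapped around a sensor reading. Here is a minimal, framework-free sketch; the gains, setpoint, and the toy first-order "plant" are invented for illustration and would need tuning on real hardware.
<syntaxhighlight lang=python>
# Minimal PID feedback loop sketch; gains and the toy "plant" are illustrative.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

if __name__ == "__main__":
    pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
    angle = 0.0  # toy joint angle "plant"
    for _ in range(500):
        command = pid.update(setpoint=1.0, measurement=angle)
        angle += command * 0.01  # crude first-order response
    print(f"Joint angle after 5 s: {angle:.3f} rad")
</syntaxhighlight>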
== Experimenting with Your Humanoid Robot ==
=== Testing and Iteration ===
==== Virtual Testing Before Physical Implementation in Humanoid Robotics ====
NVIDIA's Isaac Sim and Isaac Gym, alongside other simulators, form a crucial foundation for designing and testing humanoid robots virtually. Insights and suggestions from experts working with these tools are captured below.
===== Isaac-Based Simulators and Frameworks =====
====== Isaac Sim ======
IDE Experience: Provides a comprehensive, if complex, simulation environment.
PhysX Engine: Utilizes the PhysX engine to handle both contact and joint constraints, though Isaac Sim currently does not fully expose closed-loop constraint capabilities.
Joint Constraints: Supports maximal coordinate systems, which include joint constraints that are common in articulated robots.
Virtual Sensors: Allows the simulation of perception with virtual cameras and LiDARs, providing policy training inputs rendered with NVIDIA RTX.
====== Isaac Gym ======
Reinforcement Learning Training: Enables parallel environments for fast policy training.
PHC Approach: Integrates AMP for real-time pose control, making it easier to teach new skills.
Gait Optimization Issues: While 17-DOF walking tasks work well, gait reward optimization needs refinement for more complex tasks.
Closed-Loop Articulation: Belt-driven mechanisms provide a viable alternative for certain closed-loop designs.
====== Orbit Framework ======
Unified Training Framework: Integrates Isaac Sim and Isaac Gym for modular and consistent policy validation.
OmniIsaacGymEnvs: Offers predefined tasks like walking and standing.
====== Omniverse Isaac Gym ======
Shift in Development: NVIDIA is consolidating Isaac Gym into Isaac Sim through Omniverse, providing the best of both worlds.
Challenges: Demands powerful NVIDIA GPUs, potentially limiting some development workflows.
===== External Tools and Comparative Platforms =====
====== Legged Gym ======
A repository showcasing the state-of-the-art in legged robot training.
====== MuJoCo (MJX) ======
Offers a lightweight, open-source alternative that supports maximal coordinate constraints and is generally easier to work with.
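To get a feel for MuJoCo's Python bindings, the sketch below loads a one-joint pendulum from an inline MJCF string and steps it for one second of simulated time. It assumes the official mujoco package (pip install mujoco); the model is a toy, not a humanoid.
<syntaxhighlight lang=python>
# Minimal MuJoCo example: step a toy one-joint pendulum for 1 s of sim time.
# Assumes the official Python bindings: pip install mujoco
import mujoco

MJCF = """
<mujoco>
  <option timestep="0.002"/>
  <worldbody>
    <body pos="0 0 1">
      <joint name="hinge" type="hinge" axis="0 1 0"/>
      <geom type="capsule" fromto="0 0 0 0 0 -0.4" size="0.03" mass="1"/>
    </body>
  </worldbody>
</mujoco>
"""

model = mujoco.MjModel.from_xml_string(MJCF)
data = mujoco.MjData(model)
data.qpos[0] = 0.5  # start the pendulum off-vertical

while data.time < 1.0:
    mujoco.mj_step(model, data)

print(f"t={data.time:.3f}s  angle={data.qpos[0]:.3f} rad  velocity={data.qvel[0]:.3f} rad/s")
</syntaxhighlight>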
====== VSim ======
Claims to be 10x faster than other simulators.
====== ManiSkill/Sapien ======
Provides tactile simulation and visual-based policy training that is up to 100x faster than Isaac Sim.
===== Best Practices for Virtual Testing =====
- Incremental Complexity: Start simple and build up to more complex environments and tasks.
- Cross-Simulator Validation: Validate robot models across simulators (e.g., Isaac and MuJoCo) to ensure robustness.
- Incorporate Real-World Fidelity: Include sensor noise and imperfections for better policy generalization (see the sensor-noise sketch after this list).
- Optimize Resources:
  - Use Azure's A100 GPUs for Isaac training.
  - Capture real-world data to refine virtual training.
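As a concrete example of the real-world-fidelity point above, policies trained on perfectly clean simulated sensors often transfer poorly, so it is common to corrupt observations with noise, bias, and occasional dropouts during training. The NumPy sketch below shows the idea; the magnitudes are arbitrary assumptions you would tune to match your own sensors.
<syntaxhighlight lang=python>
# Corrupt "perfect" simulated sensor readings to better match real hardware.
# Noise, bias, and dropout magnitudes are arbitrary assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

def noisy_observation(clean_obs, noise_std=0.01, bias_range=0.02, dropout_prob=0.005):
    """Add Gaussian noise, a random bias, and occasional dropped readings."""
    obs = np.asarray(clean_obs, dtype=float)
    bias = rng.uniform(-bias_range, bias_range, size=obs.shape)
    noisy = obs + bias + rng.normal(0.0, noise_std, size=obs.shape)
    dropped = rng.random(obs.shape) < dropout_prob
    noisy[dropped] = 0.0  # crude model of a sensor glitch
    return noisy

if __name__ == "__main__":
    clean = np.array([0.10, -0.25, 1.57])  # e.g., three joint angles in rad
    print(noisy_observation(clean))
</syntaxhighlight>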
By understanding the nuances and strengths of each simulator, developers can refine their humanoid robots effectively. Using Isaac Sim, Isaac Gym, and complementary tools, a robust simulation approach ensures smooth virtual-to-physical transferability while reducing development time and costs.
== Real-World Testing ==
Gradually transition to physical testing, beginning with simple tasks and moving to more complex interactions.
=== Data Collection and Analysis ===
==== Camera Systems ====
Consider integrating advanced camera systems like those from e-con Systems or Arducam for visual feedback and navigation. Weigh camera choices against factors like latency, resolution, and ease of integration with your main control system.
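When evaluating a camera, it helps to measure what you actually get on your compute platform rather than relying on the datasheet. The sketch below uses OpenCV to report resolution and an approximate capture rate; it assumes a UVC-style camera at device index 0.
<syntaxhighlight lang=python>
# Quick camera sanity check with OpenCV: resolution and approximate FPS.
# Assumes a UVC camera at index 0; adjust the index for your setup.
import time
import cv2

cap = cv2.VideoCapture(0)
if not cap.isOpened():
    raise SystemExit("Could not open camera 0")

ok, frame = cap.read()
if not ok:
    raise SystemExit("Camera opened but returned no frame")
print(f"Resolution: {frame.shape[1]}x{frame.shape[0]}")

n_frames = 120
start = time.time()
for _ in range(n_frames):
    ok, _ = cap.read()
    if not ok:
        break
elapsed = time.time() - start
print(f"Approximate capture rate: {n_frames / elapsed:.1f} FPS")
cap.release()
</syntaxhighlight>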
== Advanced Customization and Community Engagement ==
=== Open Source Projects ===
Contribute to or start your own open-source project. For instance, platforms like GitHub host numerous projects where you can collaborate with others, such as [https://github.com/kscalelabs K-Scale Labs].
=== Modular Design ===
Engage in modular robot design to easily swap components or aesthetics. This approach allows for extensive customization and upgrades over time.
== Safety and Continuous Learning ==
=== Safety Protocols ===
Always implement robust safety measures when testing and demonstrating your robot.
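One simple, broadly applicable measure is a software watchdog that zeroes torque commands whenever the control loop or teleoperation link stops updating. The sketch below illustrates the pattern; the timeout and stop action are placeholders to adapt to your motor drivers, and a hardware e-stop should remain the primary safeguard.
<syntaxhighlight lang=python>
# Generic software watchdog: if no heartbeat arrives within the timeout,
# command zero torque. Timeout and stop action are illustrative placeholders;
# a hardware e-stop should still be the primary safety measure.
import time

class TorqueWatchdog:
    def __init__(self, timeout_s=0.1):
        self.timeout_s = timeout_s
        self.last_heartbeat = time.monotonic()

    def heartbeat(self):
        """Call this every time a fresh, valid command is received."""
        self.last_heartbeat = time.monotonic()

    def check(self, requested_torques):
        """Return the requested torques, or zeros if the source went silent."""
        if time.monotonic() - self.last_heartbeat > self.timeout_s:
            return [0.0] * len(requested_torques)
        return list(requested_torques)

if __name__ == "__main__":
    wd = TorqueWatchdog(timeout_s=0.1)
    wd.heartbeat()
    print(wd.check([1.2, -0.4]))  # fresh heartbeat -> commands pass through
    time.sleep(0.2)
    print(wd.check([1.2, -0.4]))  # stale -> torques forced to zero
</syntaxhighlight>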
9019a2ef9524c334459c6cbc44cfccefbc1e44a1
860
859
2024-05-05T19:04:23Z
Vrtnis
21
/* Custom Actuator Developments */
wikitext
text/x-wiki
This is a build guide for getting started experimenting with your own humanoid robot.
This is incomplete; you can help by expanding it!
'''Update:''' ''work in progress - starting with a template, plan to expand on sections :)''
This guide is crafted for enthusiasts who are not just looking to study humanoid robotics but to actually build and experiment with their own robots.
== Building Your Humanoid Robot ==
In humanoid robotics, choosing the right components, such as actuators and gearboxes, is crucial. Folks can use planetary and cycloidal gear actuators for their precision and strength, along with Series Elastic and Quasi-Direct Drive actuators for smoother, more natural movements. Advanced designs like the MIT Cheetah actuator push the boundaries with fast, agile movements. Projects like the SPIN initiative are also key, as they make high-quality actuator technology more accessible, helping the field evolve and improve.
== Actuators and Gearboxes ==
=== Actuator Types and Design Inspirations ===
==== Planetary and Cycloidal Gear Actuators ====
These actuators remain popular in the robotics community due to their high torque output and compact form factors. Planetary gears are favored for their efficiency and ability to handle high power densities, crucial for humanoid robotics. Cycloidal gears offer superior load-bearing capabilities and minimal backlash, ideal for precise motion control.
MyActuator (just one option) offers a variety of planetary actuators. These actuators, while still relatively pricey, offer robust performance and are integral to the efficient functioning of the builds. Some models are:
RMD X4: A lightweight and compact actuator that provides precise control and high efficiency.
RMD X6: Offers a good balance of torque and speed, suitable for medium-sized applications.
RMD X8: Features a more powerful motor and higher torque capacity, making it ideal for more demanding tasks.
RMD X10: The most powerful actuator used, designed for high torque applications with excellent control features.
Here is a [https://jakeread.pages.cba.mit.edu/actuators/ scatter plot] of actuators hosted at MIT.
==== Series Elastic and Quasi-Direct Drive Actuators ====
Series Elastic Actuators (SEAs) are used in applications requiring safe and compliant human-robot interaction. They incorporate elastic elements, allowing for energy absorption and safer interactions. Quasi-Direct Drive Actuators provide a balance between the control fidelity of direct drives and the mechanical simplicity of geared systems, promoting natural and responsive movements.
==== MIT Cheetah Actuator ====
The MIT Cheetah actuator design is a notable example that several community members are considering emulating. Its design optimizes for rapid, dynamic movements and could potentially set a standard for agile robotic locomotion. It's designed to pack a lot of power into a lightweight, compact system. It offers excellent torque and control without being bulky, making it perfect for mobile robots that need to be quick on their feet. Also, it's energy-efficient and provides a high torque-to-weight ratio, so robots can move fast and precisely, which is essential for those tricky, agile movements.
One of the coolest things about this actuator is how it manages to minimize backlash, giving you smooth, accurate control over the robot's motion. Its integrated design also means the motor and controller work together seamlessly, which keeps the system streamlined. Plus, the advanced control algorithms make it easy for the actuator to handle dynamic motions, whether it's fast acceleration or sharp turns. If you're building a robot that needs to move like a sprinter while staying super nimble, the MIT Cheetah actuator is an awesome choice.
Here is the [https://fab.cba.mit.edu/classes/865.18/motion/papers/mit-cheetah-actuator.pdf MIT research paper] if you are interested in a deeper dive.
=== Open-Source Development and Collaboration ===
==== SPIN: A Revolutionary Servo Project ====
The [https://github.com/atopile/spin-servo-drive SPIN Project] by Atopile is an open-source hardware project aimed at making it easier and more cost-effective to use BLDC servo motors. It is particularly notable for its potential to democratize high-quality actuator technology, making it accessible to a broader range of developers and hobbyists.
=== Community Insights and Future Directions ===
==== Comprehensive Actuator Comparisons ====
The humanoid robotics community actively discusses the need for a universal platform to compare and contrast the cost and performance of commercially available actuators. This could involve developing a comprehensive database or chart detailing each actuator's cost per Newton-meter, control schemes, and RPM, providing a valuable resource for both newcomers and experienced developers.
==== Custom Actuator Developments ====
[https://irisdynamics.com/products/orca-series Iris Dynamics] suggests that its Orca-series electric linear actuators can match the capabilities of human muscles, making them particularly interesting for humanoid applications.
== Assembly Tips ==
===== Community Forums =====
Leverage discussions from platforms like RobotForum to avoid common pitfalls. Whether it's selecting the right planetary gearbox or figuring out the optimal motor for each joint, community insights can be invaluable.
=== Programming and Control ===
==== ROS (Robot Operating System) ====
Start with ROS for an extensive suite of tools for programming and control, suitable for managing complex robotic functions. ROS serves as a valuable abstraction for understanding the diverse components within a robotics system. However, its size and complexity can make it hard to modify, the learning curve can be steep, and because it leans heavily on third-party packages, tracking down issues can require additional effort and expertise.
==== Custom Software Solutions ====
Explore custom algorithms for adaptive control or reactive behaviors. Integrate advanced sensor feedback loops for real-time adjustments.
== Experimenting with Your Humanoid Robot ==
=== Testing and Iteration ===
==== Virtual Testing Before Physical Implementation in Humanoid Robotics ====
NVIDIA's Isaac Sim and Isaac Gym, alongside other simulators, form a crucial foundation for designing and testing humanoid robots virtually. Insights and suggestions from experts working with these tools are captured below.
===== Isaac-Based Simulators and Frameworks =====
====== Isaac Sim ======
IDE Experience: Provides a comprehensive, if complex, simulation environment.
PhysX Engine: Utilizes the PhysX engine to handle both contact and joint constraints, though Isaac Sim currently does not fully expose closed-loop constraint capabilities.
Joint Constraints: Supports maximal coordinate systems, which include joint constraints that are common in articulated robots.
Virtual Sensors: Allows the simulation of perception with virtual cameras and LiDARs, providing policy training inputs rendered with NVIDIA RTX.
====== Isaac Gym ======
Reinforcement Learning Training: Enables parallel environments for fast policy training.
PHC Approach: Integrates AMP for real-time pose control, making it easier to teach new skills.
Gait Optimization Issues: While 17-DOF walking tasks work well, gait reward optimization needs refinement for more complex tasks.
Closed-Loop Articulation: Belt-driven mechanisms provide a viable alternative for certain closed-loop designs.
====== Orbit Framework ======
Unified Training Framework: Integrates Isaac Sim and Isaac Gym for modular and consistent policy validation.
OmniIsaacGymEnvs: Offers predefined tasks like walking and standing.
====== Omniverse Isaac Gym ======
Shift in Development: NVIDIA is consolidating Isaac Gym into Isaac Sim through Omniverse, providing the best of both worlds.
Challenges: Demands powerful NVIDIA GPUs, potentially limiting some development workflows.
===== External Tools and Comparative Platforms =====
====== Legged Gym ======
A repository showcasing the state-of-the-art in legged robot training.
====== MuJoCo (MJX) ======
Offers a lightweight, open-source alternative that supports maximal coordinate constraints and is easier to work with.
====== VSim ======
Claims to be 10x faster than other simulators.
====== ManiSkill/Sapien ======
Provides tactile simulation and visual-based policy training that is up to 100x faster than Isaac Sim.
===== Best Practices for Virtual Testing =====
- Incremental Complexity: Start simple and build up to more complex environments and tasks.
- Cross-Simulator Validation: Validate robot models across simulators (e.g., Isaac and MuJoCo) to ensure robustness.
- Incorporate Real-World Fidelity: Include sensor noise and imperfections for better policy generalization.
- Optimize Resources: Use Azure's A100 GPUs for Isaac training, and capture real-world data to refine virtual training.
By understanding the nuances and strengths of each simulator, developers can refine their humanoid robots effectively. Using Isaac Sim, Isaac Gym, and complementary tools, a robust simulation approach ensures smooth virtual-to-physical transferability while reducing development time and costs.
== Real-World Testing ==
Gradually transition to physical testing, beginning with simple tasks and moving to more complex interactions.
=== Data Collection and Analysis ===
==== Camera Systems ====
Consider integrating advanced camera systems like those from e-con Systems or Arducam for visual feedback and navigation. When choosing a camera, weigh factors like latency, resolution, and ease of integration with your main control system.
== Advanced Customization and Community Engagement ==
=== Open Source Projects ===
Contribute to or start your own open-source project. For instance, platforms like GitHub host numerous projects where you can collaborate with others, such as [https://github.com/kscalelabs K-Scale].
=== Modular Design ===
Engage in modular robot design to easily swap components or aesthetics. This approach allows for extensive customization and upgrades over time.
== Safety and Continuous Learning ==
=== Safety Protocols ===
Always implement robust safety measures when testing and demonstrating your robot.
5b6e66ccd4c3e3c5d6149cceba71eb1ee9bf4460
861
860
2024-05-05T19:04:42Z
Vrtnis
21
/* Custom Actuator Developments */
wikitext
text/x-wiki
This is a build guide for getting started experimenting with your own humanoid robot.
This is incomplete; you can help by expanding it!
'''Update:''' ''Work in progress: starting with a template, with plans to expand the sections.''
This guide is crafted for enthusiasts who are not just looking to study humanoid robotics but to actually build and experiment with their own robots.
== Building Your Humanoid Robot ==
In humanoid robotics, choosing the right components, such as actuators and gearboxes, is crucial. Builders can use planetary and cycloidal gear actuators for their precision and strength, along with Series Elastic and Quasi-Direct Drive actuators for smoother, more natural movements. Advanced designs like the MIT Cheetah actuator push the boundaries with fast, agile movements. Projects like the SPIN initiative are also key, as they make high-quality actuator technology more accessible, helping the field evolve and improve.
== Actuators and Gearboxes ==
=== Actuator Types and Design Inspirations ===
==== Planetary and Cycloidal Gear Actuators ====
These actuators remain popular in the robotics community due to their high torque output and compact form factors. Planetary gears are favored for their efficiency and ability to handle high power densities, crucial for humanoid robotics. Cycloidal gears offer superior load-bearing capabilities and minimal backlash, ideal for precise motion control.
MyActuator (just one option) offers a variety of planetary actuators. These actuators, while still relatively pricey, offer robust performance and are integral to the efficient functioning of the builds. Some models are:
RMD X4: A lightweight and compact actuator that provides precise control and high efficiency.
RMD X6: Offers a good balance of torque and speed, suitable for medium-sized applications.
RMD X8: Features a more powerful motor and higher torque capacity, making it ideal for more demanding tasks.
RMD X10: The most powerful actuator used, designed for high torque applications with excellent control features.
Here is a [https://jakeread.pages.cba.mit.edu/actuators/ scatter plot] of actuators hosted at MIT.
==== Series Elastic and Quasi-Direct Drive Actuators ====
Series Elastic Actuators (SEAs) are used in applications requiring safe and compliant human-robot interaction. They incorporate elastic elements, allowing for energy absorption and safer interactions. Quasi-Direct Drive Actuators provide a balance between the control fidelity of direct drives and the mechanical simplicity of geared systems, promoting natural and responsive movements.
==== MIT Cheetah Actuator ====
The MIT Cheetah actuator design is a notable example that several community members are considering emulating. Its design optimizes for rapid, dynamic movements and could potentially set a standard for agile robotic locomotion. It's designed to pack a lot of power into a lightweight, compact system. It offers excellent torque and control without being bulky, making it perfect for mobile robots that need to be quick on their feet. Also, it's energy-efficient and provides a high torque-to-weight ratio, so robots can move fast and precisely, which is essential for those tricky, agile movements.
One of the coolest things about this actuator is how it manages to minimize backlash, giving you smooth, accurate control over the robot's motion. Its integrated design also means the motor and controller work together seamlessly, which keeps the system streamlined. Plus, the advanced control algorithms make it easy for the actuator to handle dynamic motions, whether it's fast acceleration or sharp turns. If you're building a robot that needs to move like a sprinter while staying super nimble, the MIT Cheetah actuator is an awesome choice.
Here is the [https://fab.cba.mit.edu/classes/865.18/motion/papers/mit-cheetah-actuator.pdf MIT research paper] if you are interested in a deeper dive.
=== Open-Source Development and Collaboration ===
==== SPIN: A Revolutionary Servo Project ====
The [https://github.com/atopile/spin-servo-drive SPIN Project] by Atopile is an open-source hardware project aimed at making it easier and more cost-effective to use BLDC servo motors. It is particularly notable for its potential to democratize high-quality actuator technology, making it accessible to a broader range of developers and hobbyists.
=== Community Insights and Future Directions ===
==== Comprehensive Actuator Comparisons ====
The humanoid robotics community actively discusses the need for a universal platform to compare and contrast the cost and performance of commercially available actuators. This could involve developing a comprehensive database or chart detailing each actuator's cost per Newton-meter, control schemes, and RPM, providing a valuable resource for both newcomers and experienced developers.
==== Custom Actuator Developments ====
[https://irisdynamics.com/products/orca-series Iris Dynamics] suggests that its Orca-series electric linear actuators can match the capabilities of human muscles, making them particularly interesting for humanoid applications.
== Assembly Tips ==
===== Community Forums =====
Leverage discussions from platforms like RobotForum to avoid common pitfalls. Whether it's selecting the right planetary gearbox or figuring out the optimal motor for each joint, community insights can be invaluable.
=== Programming and Control ===
==== ROS (Robot Operating System) ====
Start with ROS for an extensive suite of tools for programming and control, suitable for managing complex robotic functions. ROS serves as a valuable abstraction for understanding the diverse components within a robotics system. However, its size and complexity can make it hard to modify, the learning curve can be steep, and because it leans heavily on third-party packages, tracking down issues can require additional effort and expertise.
==== Custom Software Solutions ====
Explore custom algorithms for adaptive control or reactive behaviors. Integrate advanced sensor feedback loops for real-time adjustments.
== Experimenting with Your Humanoid Robot ==
=== Testing and Iteration ===
==== Virtual Testing Before Physical Implementation in Humanoid Robotics ====
NVIDIA's Isaac Sim and Isaac Gym, alongside other simulators, form a crucial foundation for designing and testing humanoid robots virtually. Insights and suggestions from experts working with these tools are captured below.
===== Isaac-Based Simulators and Frameworks =====
====== Isaac Sim ======
IDE Experience: Provides a comprehensive, if complex, simulation environment.
PhysX Engine: Utilizes the PhysX engine to handle both contact and joint constraints, though Isaac Sim currently does not fully expose closed-loop constraint capabilities.
Joint Constraints: Supports maximal coordinate systems, which include joint constraints that are common in articulated robots.
Virtual Sensors: Allows the simulation of perception with virtual cameras and LiDARs, providing policy training inputs rendered with NVIDIA RTX.
====== Isaac Gym ======
Reinforcement Learning Training: Enables parallel environments for fast policy training.
PHC Approach: Integrates AMP for real-time pose control, making it easier to teach new skills.
Gait Optimization Issues: While 17-DOF walking tasks work well, gait reward optimization needs refinement for more complex tasks.
Closed-Loop Articulation: Belt-driven mechanisms provide a viable alternative for certain closed-loop designs.
====== Orbit Framework ======
Unified Training Framework: Integrates Isaac Sim and Isaac Gym for modular and consistent policy validation.
OmniIsaacGymEnvs: Offers predefined tasks like walking and standing.
====== Omniverse Isaac Gym ======
Shift in Development: NVIDIA is consolidating Isaac Gym into Isaac Sim through Omniverse, providing the best of both worlds.
Challenges: Demands powerful NVIDIA GPUs, potentially limiting some development workflows.
===== External Tools and Comparative Platforms =====
====== Legged Gym ======
A repository showcasing the state-of-the-art in legged robot training.
====== MuJoCo (MJX) ======
Offers a lightweight, open-source alternative that supports maximal coordinate constraints and is easier to work with.
====== VSim ======
Claims to be 10x faster than other simulators.
====== ManiSkill/Sapien ======
Provides tactile simulation and visual-based policy training that is up to 100x faster than Isaac Sim.
===== Best Practices for Virtual Testing =====
- Incremental Complexity: Start simple and build up to more complex environments and tasks.
- Cross-Simulator Validation: Validate robot models across simulators (e.g., Isaac and MuJoCo) to ensure robustness.
- Incorporate Real-World Fidelity: Include sensor noise and imperfections for better policy generalization.
- Optimize Resources: Use Azure's A100 GPUs for Isaac training, and capture real-world data to refine virtual training.
By understanding the nuances and strengths of each simulator, developers can refine their humanoid robots effectively. Using Isaac Sim, Isaac Gym, and complementary tools, a robust simulation approach ensures smooth virtual-to-physical transferability while reducing development time and costs.
== Real-World Testing ==
Gradually transition to physical testing, beginning with simple tasks and moving to more complex interactions.
=== Data Collection and Analysis ===
==== Camera Systems ====
Consider integrating advanced camera systems like those from e-con Systems or Arducam for visual feedback and navigation. When choosing a camera, weigh factors like latency, resolution, and ease of integration with your main control system.
== Advanced Customization and Community Engagement ==
=== Open Source Projects ===
Contribute to or start your own open-source project. For instance, platforms like GitHub host numerous projects where you can collaborate with others, such as [https://github.com/kscalelabs K-Scale].
=== Modular Design ===
Engage in modular robot design to easily swap components or aesthetics. This approach allows for extensive customization and upgrades over time.
== Safety and Continuous Learning ==
=== Safety Protocols ===
Always implement robust safety measures when testing and demonstrating your robot.
fb24b5b43ce639d32511f0282c5cdd102cb68740
862
861
2024-05-05T19:11:08Z
Vrtnis
21
/* MuJoCo (MJX) */
wikitext
text/x-wiki
This is a build guide for getting started experimenting with your own humanoid robot.
This is incomplete; you can help by expanding it!
'''Update:''' ''Work in progress: starting with a template, with plans to expand the sections.''
This guide is crafted for enthusiasts who are not just looking to study humanoid robotics but to actually build and experiment with their own robots.
== Building Your Humanoid Robot ==
In humanoid robotics, choosing the right components, such as actuators and gearboxes, is crucial. Builders can use planetary and cycloidal gear actuators for their precision and strength, along with Series Elastic and Quasi-Direct Drive actuators for smoother, more natural movements. Advanced designs like the MIT Cheetah actuator push the boundaries with fast, agile movements. Projects like the SPIN initiative are also key, as they make high-quality actuator technology more accessible, helping the field evolve and improve.
== Actuators and Gearboxes ==
=== Actuator Types and Design Inspirations ===
==== Planetary and Cycloidal Gear Actuators ====
These actuators remain popular in the robotics community due to their high torque output and compact form factors. Planetary gears are favored for their efficiency and ability to handle high power densities, crucial for humanoid robotics. Cycloidal gears offer superior load-bearing capabilities and minimal backlash, ideal for precise motion control.
MyActuator (just one option) offers a variety of planetary actuators. These actuators, while still relatively pricey, offer robust performance and are integral to the efficient functioning of the builds. Some models are:
RMD X4: A lightweight and compact actuator that provides precise control and high efficiency.
RMD X6: Offers a good balance of torque and speed, suitable for medium-sized applications.
RMD X8: Features a more powerful motor and higher torque capacity, making it ideal for more demanding tasks.
RMD X10: The most powerful actuator used, designed for high torque applications with excellent control features.
Here is a [https://jakeread.pages.cba.mit.edu/actuators/ scatter plot] of actuators hosted at MIT.
==== Series Elastic and Quasi-Direct Drive Actuators ====
Series Elastic Actuators (SEAs) are used in applications requiring safe and compliant human-robot interaction. They incorporate elastic elements, allowing for energy absorption and safer interactions. Quasi-Direct Drive Actuators provide a balance between the control fidelity of direct drives and the mechanical simplicity of geared systems, promoting natural and responsive movements.
==== MIT Cheetah Actuator ====
The MIT Cheetah actuator design is a notable example that several community members are considering emulating. Its design optimizes for rapid, dynamic movements and could potentially set a standard for agile robotic locomotion. It's designed to pack a lot of power into a lightweight, compact system. It offers excellent torque and control without being bulky, making it perfect for mobile robots that need to be quick on their feet. Also, it's energy-efficient and provides a high torque-to-weight ratio, so robots can move fast and precisely, which is essential for those tricky, agile movements.
One of the coolest things about this actuator is how it manages to minimize backlash, giving you smooth, accurate control over the robot's motion. Its integrated design also means the motor and controller work together seamlessly, which keeps the system streamlined. Plus, the advanced control algorithms make it easy for the actuator to handle dynamic motions, whether it's fast acceleration or sharp turns. If you're building a robot that needs to move like a sprinter while staying super nimble, the MIT Cheetah actuator is an awesome choice.
Here is the [https://fab.cba.mit.edu/classes/865.18/motion/papers/mit-cheetah-actuator.pdf MIT research paper] if you are interested in a deeper dive.
=== Open-Source Development and Collaboration ===
==== SPIN: A Revolutionary Servo Project ====
The [https://github.com/atopile/spin-servo-drive SPIN Project] by Atopile is an open-source hardware project aimed at making it easier and more cost-effective to use BLDC servo motors. It is particularly notable for its potential to democratize high-quality actuator technology, making it accessible to a broader range of developers and hobbyists.
=== Community Insights and Future Directions ===
==== Comprehensive Actuator Comparisons ====
The humanoid robotics community actively discusses the need for a universal platform to compare and contrast the cost and performance of commercially available actuators. This could involve developing a comprehensive database or chart detailing each actuator's cost per Newton-meter, control schemes, and RPM, providing a valuable resource for both newcomers and experienced developers.
==== Custom Actuator Developments ====
[https://irisdynamics.com/products/orca-series Iris Dynamics] suggests that its Orca-series electric linear actuators can match the capabilities of human muscles, making them particularly interesting for humanoid applications.
== Assembly Tips ==
===== Community Forums =====
Leverage discussions from platforms like RobotForum to avoid common pitfalls. Whether it's selecting the right planetary gearbox or figuring out the optimal motor for each joint, community insights can be invaluable.
=== Programming and Control ===
==== ROS (Robot Operating System) ====
Start with ROS for an extensive suite of tools for programming and control, suitable for managing complex robotic functions. ROS serves as a valuable abstraction for understanding the diverse components within a robotics system. However, its size and complexity can make it hard to modify, the learning curve can be steep, and because it leans heavily on third-party packages, tracking down issues can require additional effort and expertise.
==== Custom Software Solutions ====
Explore custom algorithms for adaptive control or reactive behaviors. Integrate advanced sensor feedback loops for real-time adjustments.
== Experimenting with Your Humanoid Robot ==
=== Testing and Iteration ===
==== Virtual Testing Before Physical Implementation in Humanoid Robotics ====
NVIDIA's Isaac Sim and Isaac Gym, alongside other simulators, form a crucial foundation for designing and testing humanoid robots virtually. Insights and suggestions from experts working with these tools are captured below.
===== Isaac-Based Simulators and Frameworks =====
====== Isaac Sim ======
IDE Experience: Provides a comprehensive, if complex, simulation environment.
PhysX Engine: Utilizes the PhysX engine to handle both contact and joint constraints, though Isaac Sim currently does not fully expose closed-loop constraint capabilities.
Joint Constraints: Supports maximal coordinate systems, which include joint constraints that are common in articulated robots.
Virtual Sensors: Allows the simulation of perception with virtual cameras and LiDARs, providing policy training inputs rendered with NVIDIA RTX.
====== Isaac Gym ======
Reinforcement Learning Training: Enables parallel environments for fast policy training.
PHC Approach: Integrates AMP for real-time pose control, making it easier to teach new skills.
Gait Optimization Issues: While 17-DOF walking tasks work well, gait reward optimization needs refinement for more complex tasks.
Closed-Loop Articulation: Belt-driven mechanisms provide a viable alternative for certain closed-loop designs.
====== Orbit Framework ======
Unified Training Framework: Integrates Isaac Sim and Isaac Gym for modular and consistent policy validation.
OmniIsaacGymEnvs: Offers predefined tasks like walking and standing.
====== Omniverse Isaac Gym ======
Shift in Development: NVIDIA is consolidating Isaac Gym into Isaac Sim through Omniverse, providing the best of both worlds.
Challenges: Demands powerful NVIDIA GPUs, potentially limiting some development workflows.
===== External Tools and Comparative Platforms =====
====== Legged Gym ======
A repository showcasing the state-of-the-art in legged robot training.
====== MuJoCo (MJX) ======
Offers a lightweight, open-source alternative that supports maximal coordinate constraints and is easier to work with. The MuJoCo_MPC repository, created by Google DeepMind, is a toolset that combines Model Predictive Control (MPC) with the MuJoCo physics engine for real-time behavior synthesis. The MJX extension adds GPU acceleration, allowing many environments to be simulated in parallel. One approach is to replicate the techniques detailed in the AMP (Adversarial Motion Priors) paper to achieve agile humanoid behavior, for example implementing a humanoid get-up sequence matching what was described in the AMP research.
There has been collaboration between different projects, such as Stompy, to get humanoid simulations up and running. An initial conversion was done so that Gymnasium could handle the URDF (Universal Robot Description Format) file. Although converting to MJCF (MuJoCo's XML-based format) may present some challenges, it can still be made to work, and the motor and actuator setup refined from there.
Although MuJoCo can be slower in single-environment simulations, the MJX extension and its parallel-processing potential make it a solid competitor.
Compared to environments like NVIDIA's Isaac Gym, MuJoCo might stand out for its extensibility and rapid development. One goal could be to recreate the walking, running, and getting-up behaviors described in the AMP paper and use them as a foundation for training robust humanoid movements in simulation.
====== VSim ======
Claims to be 10x faster than other simulators.
====== ManiSkill/Sapien ======
Provides tactile simulation and visual-based policy training that is up to 100x faster than Isaac Sim.
===== Best Practices for Virtual Testing =====
- Incremental Complexity: Start simple and build up to more complex environments and tasks.
- Cross-Simulator Validation: Validate robot models across simulators (e.g., Isaac and MuJoCo) to ensure robustness.
- Incorporate Real-World Fidelity: Include sensor noise and imperfections for better policy generalization.
- Optimize Resources: Use Azure's A100 GPUs for Isaac training, and capture real-world data to refine virtual training.
By understanding the nuances and strengths of each simulator, developers can refine their humanoid robots effectively. Using Isaac Sim, Isaac Gym, and complementary tools, a robust simulation approach ensures smooth virtual-to-physical transferability while reducing development time and costs.
== Real-World Testing ==
Gradually transition to physical testing, beginning with simple tasks and moving to more complex interactions.
=== Data Collection and Analysis ===
==== Camera Systems ====
Consider integrating advanced camera systems like those from e-con Systems or Arducam for visual feedback and navigation. When choosing a camera, weigh factors like latency, resolution, and ease of integration with your main control system.
== Advanced Customization and Community Engagement ==
=== Open Source Projects ===
Contribute to or start your own open-source project. For instance, platforms like GitHub host numerous projects where you can collaborate with others, such as [https://github.com/kscalelabs K-Scale].
=== Modular Design ===
Engage in modular robot design to easily swap components or aesthetics. This approach allows for extensive customization and upgrades over time.
== Safety and Continuous Learning ==
=== Safety Protocols ===
Always implement robust safety measures when testing and demonstrating your robot.
e9fbe284dd5453c5748313d951368ab351fb5e2e
863
862
2024-05-05T19:11:52Z
Vrtnis
21
/* MuJoCo (MJX) */
wikitext
text/x-wiki
This is a build guide for getting started experimenting with your own humanoid robot.
This is incomplete; you can help by expanding it!
'''Update:''' ''Work in progress: starting with a template, with plans to expand the sections.''
This guide is crafted for enthusiasts who are not just looking to study humanoid robotics but to actually build and experiment with their own robots.
== Building Your Humanoid Robot ==
In humanoid robotics, choosing the right components, such as actuators and gearboxes, is crucial. Builders can use planetary and cycloidal gear actuators for their precision and strength, along with Series Elastic and Quasi-Direct Drive actuators for smoother, more natural movements. Advanced designs like the MIT Cheetah actuator push the boundaries with fast, agile movements. Projects like the SPIN initiative are also key, as they make high-quality actuator technology more accessible, helping the field evolve and improve.
== Actuators and Gearboxes ==
=== Actuator Types and Design Inspirations ===
==== Planetary and Cycloidal Gear Actuators ====
These actuators remain popular in the robotics community due to their high torque output and compact form factors. Planetary gears are favored for their efficiency and ability to handle high power densities, crucial for humanoid robotics. Cycloidal gears offer superior load-bearing capabilities and minimal backlash, ideal for precise motion control.
MyActuator (just one option) offers a variety of planetary actuators. These actuators, while still relatively pricey, offer robust performance and are integral to the efficient functioning of the builds. Some models are:
RMD X4: A lightweight and compact actuator that provides precise control and high efficiency.
RMD X6: Offers a good balance of torque and speed, suitable for medium-sized applications.
RMD X8: Features a more powerful motor and higher torque capacity, making it ideal for more demanding tasks.
RMD X10: The most powerful actuator used, designed for high torque applications with excellent control features.
Here is a [https://jakeread.pages.cba.mit.edu/actuators/ scatter plot] of actuators hosted at MIT.
==== Series Elastic and Quasi-Direct Drive Actuators ====
Series Elastic Actuators (SEAs) are used in applications requiring safe and compliant human-robot interaction. They incorporate elastic elements, allowing for energy absorption and safer interactions. Quasi-Direct Drive Actuators provide a balance between the control fidelity of direct drives and the mechanical simplicity of geared systems, promoting natural and responsive movements.
==== MIT Cheetah Actuator ====
The MIT Cheetah actuator design is a notable example that several community members are considering emulating. Its design optimizes for rapid, dynamic movements and could potentially set a standard for agile robotic locomotion. It's designed to pack a lot of power into a lightweight, compact system. It offers excellent torque and control without being bulky, making it perfect for mobile robots that need to be quick on their feet. Also, it's energy-efficient and provides a high torque-to-weight ratio, so robots can move fast and precisely, which is essential for those tricky, agile movements.
One of the coolest things about this actuator is how it manages to minimize backlash, giving you smooth, accurate control over the robot's motion. Its integrated design also means the motor and controller work together seamlessly, which keeps the system streamlined. Plus, the advanced control algorithms make it easy for the actuator to handle dynamic motions, whether it's fast acceleration or sharp turns. If you're building a robot that needs to move like a sprinter while staying super nimble, the MIT Cheetah actuator is an awesome choice.
Here is the [https://fab.cba.mit.edu/classes/865.18/motion/papers/mit-cheetah-actuator.pdf MIT research paper] if you are interested in a deeper dive.
=== Open-Source Development and Collaboration ===
==== SPIN: A Revolutionary Servo Project ====
The [https://github.com/atopile/spin-servo-drive SPIN Project] by Atopile is an open-source hardware project aimed at making it easier and more cost-effective to use BLDC servo motors. It is particularly notable for its potential to democratize high-quality actuator technology, making it accessible to a broader range of developers and hobbyists.
=== Community Insights and Future Directions ===
==== Comprehensive Actuator Comparisons ====
The humanoid robotics community actively discusses the need for a universal platform to compare and contrast the cost and performance of commercially available actuators. This could involve developing a comprehensive database or chart detailing each actuator's cost per Newton-meter, control schemes, and RPM, providing a valuable resource for both newcomers and experienced developers.
==== Custom Actuator Developments ====
[https://irisdynamics.com/products/orca-series Iris Dynamics] suggests that its Orca-series electric linear actuators can match the capabilities of human muscles, making them particularly interesting for humanoid applications.
== Assembly Tips ==
===== Community Forums =====
Leverage discussions from platforms like RobotForum to avoid common pitfalls. Whether it's selecting the right planetary gearbox or figuring out the optimal motor for each joint, community insights can be invaluable.
=== Programming and Control ===
==== ROS (Robot Operating System) ====
Start with ROS for an extensive suite of tools for programming and control, suitable for managing complex robotic functions. ROS serves as a valuable abstraction for understanding the diverse components within a robotics system. However, its size and complexity can make it hard to modify, the learning curve can be steep, and because it leans heavily on third-party packages, tracking down issues can require additional effort and expertise.
==== Custom Software Solutions ====
Explore custom algorithms for adaptive control or reactive behaviors. Integrate advanced sensor feedback loops for real-time adjustments.
== Experimenting with Your Humanoid Robot ==
=== Testing and Iteration ===
==== Virtual Testing Before Physical Implementation in Humanoid Robotics ====
NVIDIA's Isaac Sim and Isaac Gym, alongside other simulators, form a crucial foundation for designing and testing humanoid robots virtually. Insights and suggestions from experts working with these tools are captured below.
===== Isaac-Based Simulators and Frameworks =====
====== Isaac Sim ======
IDE Experience: Provides a comprehensive, if complex, simulation environment.
PhysX Engine: Utilizes the PhysX engine to handle both contact and joint constraints, though Isaac Sim currently does not fully expose closed-loop constraint capabilities.
Joint Constraints: Supports maximal coordinate systems, which include joint constraints that are common in articulated robots.
Virtual Sensors: Allows the simulation of perception with virtual cameras and LiDARs, providing policy training inputs rendered with NVIDIA RTX.
====== Isaac Gym ======
Reinforcement Learning Training: Enables parallel environments for fast policy training.
PHC Approach: Integrates AMP for real-time pose control, making it easier to teach new skills.
Gait Optimization Issues: While 17-DOF walking tasks work well, gait reward optimization needs refinement for more complex tasks.
Closed-Loop Articulation: Belt-driven mechanisms provide a viable alternative for certain closed-loop designs.
====== Orbit Framework ======
Unified Training Framework: Integrates Isaac Sim and Isaac Gym for modular and consistent policy validation.
OmniIsaacGymEnvs: Offers predefined tasks like walking and standing.
====== Omniverse Isaac Gym ======
Shift in Development: NVIDIA is consolidating Isaac Gym into Isaac Sim through Omniverse, providing the best of both worlds.
Challenges: Demands powerful NVIDIA GPUs, potentially limiting some development workflows.
===== External Tools and Comparative Platforms =====
====== Legged Gym ======
A repository showcasing the state-of-the-art in legged robot training.
====== MuJoCo (MJX) ======
Offers a lightweight, open-source alternative that supports maximal coordinate constraints and is easier to work with. The MuJoCo_MPC repository, created by Google DeepMind, is a toolset that combines Model Predictive Control (MPC) with the MuJoCo physics engine for real-time behavior synthesis. The MJX extension adds GPU acceleration, allowing many environments to be simulated in parallel. One approach is to replicate the techniques detailed in the AMP (Adversarial Motion Priors) paper to achieve agile humanoid behavior, for example implementing a humanoid get-up sequence matching what was described in the AMP research.
There has been collaboration between different projects, such as Stompy, to get humanoid simulations up and running. You could try converting the robot's URDF (Universal Robot Description Format) model so that Gymnasium can load it. Although converting to MJCF (MuJoCo's XML-based format) may present some challenges, it can still be made to work, and the motor and actuator setup refined from there.
Although MuJoCo can be slower in single-environment simulations, the MJX extension and its parallel-processing potential make it a solid competitor. Compared to environments like NVIDIA's Isaac Gym, MuJoCo might stand out for its extensibility and rapid development. One goal could be to recreate the walking, running, and getting-up behaviors described in the AMP paper and use them as a foundation for training robust humanoid movements in simulation.
====== VSim ======
Claims to be 10x faster than other simulators.
====== ManiSkill/Sapien ======
Provides tactile simulation and visual-based policy training that is up to 100x faster than Isaac Sim.
===== Best Practices for Virtual Testing =====
- Incremental Complexity: Start simple and build up to more complex environments and tasks.
- Cross-Simulator Validation: Validate robot models across simulators (e.g., Isaac and MuJoCo) to ensure robustness.
- Incorporate Real-World Fidelity: Include sensor noise and imperfections for better policy generalization.
- Optimize Resources: Use Azure's A100 GPUs for Isaac training, and capture real-world data to refine virtual training.
By understanding the nuances and strengths of each simulator, developers can refine their humanoid robots effectively. Using Isaac Sim, Isaac Gym, and complementary tools, a robust simulation approach ensures smooth virtual-to-physical transferability while reducing development time and costs.
== Real-World Testing ==
Gradually transition to physical testing, beginning with simple tasks and moving to more complex interactions.
=== Data Collection and Analysis ===
==== Camera Systems ====
Consider integrating advanced camera systems like those from e-con Systems or Arducam for visual feedback and navigation. When choosing a camera, weigh factors like latency, resolution, and ease of integration with your main control system.
== Advanced Customization and Community Engagement ==
=== Open Source Projects ===
Contribute to or start your own open-source project. For instance, platforms like GitHub host numerous projects where you can collaborate with others, such as [https://github.com/kscalelabs K-Scale].
=== Modular Design ===
Engage in modular robot design to easily swap components or aesthetics. This approach allows for extensive customization and upgrades over time.
== Safety and Continuous Learning ==
=== Safety Protocols ===
Always implement robust safety measures when testing and demonstrating your robot.
7e7338c7af7aaa0435222cc2bd2012fc43ed525f
864
863
2024-05-05T19:27:47Z
Vrtnis
21
wikitext
text/x-wiki
This is a build guide for getting started experimenting with your own humanoid robot.
This is incomplete and a work in progress; you can help by expanding it!
== Building Your Humanoid Robot ==
In humanoid robotics, choosing the right components, such as actuators and gearboxes, is crucial. Builders can use planetary and cycloidal gear actuators for their precision and strength, along with Series Elastic and Quasi-Direct Drive actuators for smoother, more natural movements. Advanced designs like the MIT Cheetah actuator push the boundaries with fast, agile movements. Projects like the SPIN initiative are also key, as they make high-quality actuator technology more accessible, helping the field evolve and improve.
== Actuators and Gearboxes ==
=== Actuator Types and Design Inspirations ===
==== Planetary and Cycloidal Gear Actuators ====
These actuators remain popular in the robotics community due to their high torque output and compact form factors. Planetary gears are favored for their efficiency and ability to handle high power densities, crucial for humanoid robotics. Cycloidal gears offer superior load-bearing capabilities and minimal backlash, ideal for precise motion control.
MyActuator (just one option) offers a variety of planetary actuators. These actuators, while still relatively pricey, offer robust performance and are integral to the efficient functioning of the builds. Some models are:
RMD X4: A lightweight and compact actuator that provides precise control and high efficiency.
RMD X6: Offers a good balance of torque and speed, suitable for medium-sized applications.
RMD X8: Features a more powerful motor and higher torque capacity, making it ideal for more demanding tasks.
RMD X10: The most powerful actuator used, designed for high torque applications with excellent control features.
Here is a [https://jakeread.pages.cba.mit.edu/actuators/ scatter plot] of actuators hosted at MIT.
==== Series Elastic and Quasi-Direct Drive Actuators ====
Series Elastic Actuators (SEAs) are used in applications requiring safe and compliant human-robot interaction. They incorporate elastic elements, allowing for energy absorption and safer interactions. Quasi-Direct Drive Actuators provide a balance between the control fidelity of direct drives and the mechanical simplicity of geared systems, promoting natural and responsive movements.
==== MIT Cheetah Actuator ====
The MIT Cheetah actuator design is a notable example that several community members are considering emulating. Its design optimizes for rapid, dynamic movements and could potentially set a standard for agile robotic locomotion. It's designed to pack a lot of power into a lightweight, compact system. It offers excellent torque and control without being bulky, making it perfect for mobile robots that need to be quick on their feet. Also, it's energy-efficient and provides a high torque-to-weight ratio, so robots can move fast and precisely, which is essential for those tricky, agile movements.
One of the coolest things about this actuator is how it manages to minimize backlash, giving you smooth, accurate control over the robot's motion. Its integrated design also means the motor and controller work together seamlessly, which keeps the system streamlined. Plus, the advanced control algorithms make it easy for the actuator to handle dynamic motions, whether it's fast acceleration or sharp turns. If you're building a robot that needs to move like a sprinter while staying super nimble, the MIT Cheetah actuator is an awesome choice.
Here is the [https://fab.cba.mit.edu/classes/865.18/motion/papers/mit-cheetah-actuator.pdf MIT research paper] if you are interested in a deeper dive.
=== Open-Source Development and Collaboration ===
==== SPIN: A Revolutionary Servo Project ====
The [https://github.com/atopile/spin-servo-drive SPIN Project] by Atopile is an open-source hardware project aimed at making it easier and more cost-effective to use BLDC servo motors. It is particularly notable for its potential to democratize high-quality actuator technology, making it accessible to a broader range of developers and hobbyists.
=== Community Insights and Future Directions ===
==== Comprehensive Actuator Comparisons ====
The humanoid robotics community actively discusses the need for a universal platform to compare and contrast the cost and performance of commercially available actuators. This could involve developing a comprehensive database or chart detailing each actuator's cost per Newton-meter, control schemes, and RPM, providing a valuable resource for both newcomers and experienced developers.
==== Custom Actuator Developments ====
[https://irisdynamics.com/products/orca-series Iris Dynamics] suggests that its Orca-series electric linear actuators can match the capabilities of human muscles, making them particularly interesting for humanoid applications.
== Assembly Tips ==
===== Community Forums =====
Leverage discussions from platforms like RobotForum to avoid common pitfalls. Whether it's selecting the right planetary gearbox or figuring out the optimal motor for each joint, community insights can be invaluable.
=== Programming and Control ===
==== ROS (Robot Operating System) ====
Start with ROS for an extensive suite of tools for programming and control, suitable for managing complex robotic functions. ROS serves as a valuable abstraction for understanding the diverse components within a robotics system. However, its size and complexity can make it hard to modify, the learning curve can be steep, and because it leans heavily on third-party packages, tracking down issues can require additional effort and expertise.
==== Custom Software Solutions ====
Explore custom algorithms for adaptive control or reactive behaviors. Integrate advanced sensor feedback loops for real-time adjustments.
== Experimenting with Your Humanoid Robot ==
=== Testing and Iteration ===
==== Virtual Testing Before Physical Implementation in Humanoid Robotics ====
NVIDIA's Isaac Sim and Isaac Gym, alongside other simulators, form a crucial foundation for designing and testing humanoid robots virtually. Insights and suggestions from experts working with these tools are captured below.
===== Isaac-Based Simulators and Frameworks =====
====== Isaac Sim ======
IDE Experience: Provides a comprehensive, if complex, simulation environment.
PhysX Engine: Utilizes the PhysX engine to handle both contact and joint constraints, though Isaac Sim currently does not fully expose closed-loop constraint capabilities.
Joint Constraints: Supports maximal coordinate systems, which include joint constraints that are common in articulated robots.
Virtual Sensors: Allows the simulation of perception with virtual cameras and LiDARs, providing policy training inputs rendered with NVIDIA RTX.
====== Isaac Gym ======
Reinforcement Learning Training: Enables parallel environments for fast policy training (a minimal environment-setup sketch follows this list).
PHC Approach: Integrates AMP for real-time pose control, making it easier to teach new skills.
Gait Optimization Issues: While 17-DOF walking tasks work well, gait reward optimization needs refinement for more complex tasks.
Closed-Loop Articulation: Belt-driven mechanisms provide a viable alternative for certain closed-loop designs.
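To make the parallel-environments idea concrete, here is a bare-bones sketch using the isaacgym Python API (Preview release) that creates a batch of side-by-side environments and steps the physics. The asset root and robot file names are placeholders, and observations, rewards, and the tensor API are omitted; treat it as the shape of the workflow rather than a working training setup.
<syntaxhighlight lang="python">
# Bare-bones Isaac Gym sketch: a grid of environments stepped in parallel.
# Asset root/file names are placeholders; rewards and observations are intentionally omitted.
from isaacgym import gymapi

gym = gymapi.acquire_gym()
sim_params = gymapi.SimParams()
sim_params.dt = 1.0 / 60.0
sim = gym.create_sim(0, 0, gymapi.SIM_PHYSX, sim_params)
gym.add_ground(sim, gymapi.PlaneParams())

asset_opts = gymapi.AssetOptions()
asset = gym.load_asset(sim, "assets", "my_humanoid.urdf", asset_opts)  # placeholder paths

num_envs, spacing = 64, 2.0
lower, upper = gymapi.Vec3(-spacing, 0.0, -spacing), gymapi.Vec3(spacing, spacing, spacing)
for i in range(num_envs):
    env = gym.create_env(sim, lower, upper, 8)  # 8 environments per row
    pose = gymapi.Transform()
    pose.p = gymapi.Vec3(0.0, 1.0, 0.0)  # drop the robot slightly above the ground plane
    gym.create_actor(env, asset, pose, f"humanoid_{i}", i, 1)

for _ in range(1000):
    gym.simulate(sim)
    gym.fetch_results(sim, True)
</syntaxhighlight>
In a real training setup you would add observation and reward computation on top of this loop, typically through the tensor API so that everything stays on the GPU.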
====== Orbit Framework ======
Unified Training Framework: Integrates Isaac Sim and Isaac Gym for modular and consistent policy validation.
OmniIsaacGymEnvs: Offers predefined tasks like walking and standing.
====== Omniverse Isaac Gym ======
Shift in Development: NVIDIA is consolidating Isaac Gym into Isaac Sim through Omniverse, providing the best of both worlds.
Challenges: Demands powerful NVIDIA GPUs, potentially limiting some development workflows.
===== External Tools and Comparative Platforms =====
====== Legged Gym ======
A repository showcasing the state-of-the-art in legged robot training.
====== MuJoCo (MJX) ======
Offers a lightweight, open-source alternative that supports maximal coordinate constraints and is easier to work with. The MuJoCo_MPC repository, created by Google DeepMind, is a toolset that combines Model Predictive Control (MPC) with the MuJoCo physics engine for real-time behavior synthesis. The MJX extension adds GPU acceleration, allowing many environments to be simulated in parallel. One approach is to replicate the techniques detailed in the AMP (Adversarial Motion Priors) paper to achieve agile humanoid behavior, for example implementing a humanoid get-up sequence matching what was described in the AMP research.
There has been collaboration between different projects, such as Stompy, to get humanoid simulations up and running. You could try converting the robot's URDF (Universal Robot Description Format) model so that Gymnasium can load it. Although converting to MJCF (MuJoCo's XML-based format) may present some challenges, it can still be made to work, and the motor and actuator setup refined from there.
Although MuJoCo can be slower in single-environment simulations, the MJX extension and its parallel-processing potential make it a solid competitor. Compared to environments like NVIDIA's Isaac Gym, MuJoCo might stand out for its extensibility and rapid development. One goal could be to recreate the walking, running, and getting-up behaviors described in the AMP paper and use them as a foundation for training robust humanoid movements in simulation.
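To illustrate the parallel-simulation point, here is a minimal MJX sketch that loads a model (MuJoCo's compiler can also ingest URDF directly, though actuators usually need hand-tuning in MJCF afterwards) and steps a batch of slightly randomized states with jax.vmap. The model path is a placeholder and there is no policy or reward here; it only shows the batching pattern.
<syntaxhighlight lang="python">
# Minimal MJX batching sketch: step many slightly-randomized copies of one model in parallel.
# "humanoid.xml" is a placeholder path; no policy, rewards, or rendering are included.
import jax
import mujoco
from mujoco import mjx

model = mujoco.MjModel.from_xml_path("humanoid.xml")  # MJCF (or URDF) file of your robot
mjx_model = mjx.put_model(model)

def rollout_step(rng):
    data = mjx.make_data(mjx_model)
    qpos = data.qpos + 0.01 * jax.random.normal(rng, data.qpos.shape)  # jitter the initial pose
    data = data.replace(qpos=qpos)
    data = mjx.step(mjx_model, data)
    return data.qpos

rngs = jax.random.split(jax.random.PRNGKey(0), 1024)
batched_qpos = jax.jit(jax.vmap(rollout_step))(rngs)  # 1024 environments stepped together
print(batched_qpos.shape)
</syntaxhighlight>
The same vmapped step is what an RL training loop would call repeatedly, with a policy producing controls between steps.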
====== VSim ======
Claims to be 10x faster than other simulators.
====== ManiSkill/Sapien ======
Provides tactile simulation and visual-based policy training that is up to 100x faster than Isaac Sim.
===== Best Practices for Virtual Testing =====
- Incremental Complexity: Start simple and build up to more complex environments and tasks.
- Cross-Simulator Validation: Validate robot models across simulators (e.g., Isaac and MuJoCo) to ensure robustness.
- Incorporate Real-World Fidelity: Include sensor noise and imperfections for better policy generalization (a minimal noise-injection sketch follows this list).
- Optimize Resources: Use Azure's A100 GPUs for Isaac training, and capture real-world data to refine virtual training.
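Here is a small noise-injection helper of the kind referenced above: it corrupts clean simulator observations with Gaussian noise plus a per-episode bias before they reach the policy. The noise magnitudes are arbitrary example values and should be tuned to match your actual sensors.
<syntaxhighlight lang="python">
# Add sensor noise and a per-episode bias to simulated observations before feeding a policy.
# The standard deviations below are arbitrary examples; tune them to your real sensors.
import numpy as np

class NoisySensors:
    def __init__(self, noise_std=0.01, bias_std=0.005, seed=0):
        self.rng = np.random.default_rng(seed)
        self.noise_std = noise_std
        self.bias_std = bias_std
        self.bias = None

    def reset(self, obs_dim):
        # Sample a fixed bias per episode, mimicking sensor offset or miscalibration.
        self.bias = self.rng.normal(0.0, self.bias_std, size=obs_dim)

    def corrupt(self, clean_obs):
        noise = self.rng.normal(0.0, self.noise_std, size=clean_obs.shape)
        return clean_obs + self.bias + noise

sensors = NoisySensors()
sensors.reset(obs_dim=4)
print(sensors.corrupt(np.zeros(4)))  # e.g. noisy joint angles for a 4-DOF example
</syntaxhighlight>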
By understanding the nuances and strengths of each simulator, developers can refine their humanoid robots effectively. Using Isaac Sim, Isaac Gym, and complementary tools, a robust simulation approach ensures smooth virtual-to-physical transferability while reducing development time and costs.
== Real-World Testing ==
Gradually transition to physical testing, beginning with simple tasks and moving to more complex interactions.
=== Data Collection and Analysis ===
==== Camera Systems ====
Consider integrating advanced camera systems like those from e-con Systems or Arducam for visual feedback and navigation. When choosing a camera, weigh factors like latency, resolution, and ease of integration with your main control system.
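When comparing camera options, a quick way to sanity-check capture resolution and a rough per-frame grab time on your own machine is a short OpenCV script like the one below. The device index 0 and the requested resolution are assumptions for the example, and grab time is only a crude proxy for end-to-end latency.
<syntaxhighlight lang="python">
# Rough camera check with OpenCV: requested vs. actual resolution, plus mean frame grab time.
# Device index 0 is an assumption; change it to match your camera.
import time
import cv2

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)

grab_times = []
for _ in range(60):
    t0 = time.perf_counter()
    ok, frame = cap.read()
    if not ok:
        break
    grab_times.append(time.perf_counter() - t0)

cap.release()
if grab_times:
    h, w = frame.shape[:2]
    print(f"actual resolution: {w}x{h}, mean grab time: {1000 * sum(grab_times) / len(grab_times):.1f} ms")
</syntaxhighlight>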
== Advanced Customization and Community Engagement ==
=== Open Source Projects ===
Contribute to or start your own open-source project. For instance, platforms like GitHub host numerous projects where you can collaborate with others, such as [https://github.com/kscalelabs K-Scale].
=== Modular Design ===
Engage in modular robot design to easily swap components or aesthetics. This approach allows for extensive customization and upgrades over time.
== Safety and Continuous Learning ==
=== Safety Protocols ===
Always implement robust safety measures when testing and demonstrating your robot.
9f8acc36732dec1fea191c0bb0be97e41a224cbd
876
864
2024-05-06T06:18:19Z
Vrtnis
21
wikitext
text/x-wiki
This is a build guide for getting started experimenting with your own humanoid robot.
This is incomplete and a work in progress; you can help by expanding it!
== Building Your Humanoid Robot ==
In humanoid robotics, choosing the right components, such as actuators and gearboxes, is crucial. Builders can use planetary and cycloidal gear actuators for their precision and strength, along with Series Elastic and Quasi-Direct Drive actuators for smoother, more natural movements. Advanced designs like the MIT Cheetah actuator push the boundaries with fast, agile movements. Projects like the SPIN initiative are also key, as they make high-quality actuator technology more accessible, helping the field evolve and improve.
== Actuators and Gearboxes ==
=== Actuator Types and Design Inspirations ===
==== Planetary and Cycloidal Gear Actuators ====
These actuators remain popular in the robotics community due to their high torque output and compact form factors. Planetary gears are favored for their efficiency and ability to handle high power densities, crucial for humanoid robotics. Cycloidal gears offer superior load-bearing capabilities and minimal backlash, ideal for precise motion control.
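To get an intuition for how a gear stage trades speed for torque when shortlisting actuators, a quick back-of-envelope calculation helps. The sketch below is a minimal example with made-up numbers; the motor torque, gear ratio, and efficiency are placeholders, not specs of any particular product.
<syntaxhighlight lang="python">
# Rough gearbox sizing: output torque and speed for a single reduction stage.
# All numbers are illustrative placeholders, not vendor specifications.

def gearbox_output(motor_torque_nm, motor_speed_rpm, ratio, efficiency=0.9):
    """Return (output torque in N*m, output speed in RPM) after a reduction stage."""
    out_torque = motor_torque_nm * ratio * efficiency   # torque multiplied by the ratio, minus losses
    out_speed = motor_speed_rpm / ratio                 # speed divided by the ratio
    return out_torque, out_speed

# Example: a small BLDC producing 0.8 N*m at 3000 RPM through a 9:1 planetary stage.
torque, speed = gearbox_output(0.8, 3000, ratio=9, efficiency=0.85)
print(f"~{torque:.1f} N*m at ~{speed:.0f} RPM at the joint")
</syntaxhighlight>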
MyActuator (just one option) offers a variety of planetary actuators. These actuators, while still relatively pricey, offer robust performance and are integral to the efficient functioning of the builds. Some models are:
RMD X4: A lightweight and compact actuator that provides precise control and high efficiency.
RMD X6: Offers a good balance of torque and speed, suitable for medium-sized applications.
RMD X8: Features a more powerful motor and higher torque capacity, making it ideal for more demanding tasks.
RMD X10: The most powerful actuator used, designed for high torque applications with excellent control features.
==== Series Elastic and Quasi-Direct Drive Actuators ====
Series Elastic Actuators (SEAs) are used in applications requiring safe and compliant human-robot interaction. They incorporate elastic elements, allowing for energy absorption and safer interactions. Quasi-Direct Drive Actuators provide a balance between the control fidelity of direct drives and the mechanical simplicity of geared systems, promoting natural and responsive movements.
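One practical property of SEAs is that joint torque can be estimated from the deflection of the series spring, which is what enables compliant force control. The following is a minimal illustration of that idea; the spring stiffness and encoder readings are invented values for the example.
<syntaxhighlight lang="python">
# Estimate joint torque from series-spring deflection (the core idea behind SEAs).
# Stiffness and angle values below are illustrative, not taken from real hardware.

SPRING_STIFFNESS = 300.0  # N*m per radian of deflection (placeholder)

def sea_torque(motor_side_angle_rad, joint_side_angle_rad, stiffness=SPRING_STIFFNESS):
    """Torque transmitted through the elastic element: tau = k * deflection."""
    deflection = motor_side_angle_rad - joint_side_angle_rad
    return stiffness * deflection

# Example: 0.02 rad of spring wind-up corresponds to about 6 N*m at the joint.
print(sea_torque(1.02, 1.00))
</syntaxhighlight>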
==== MIT Cheetah Actuator ====
The MIT Cheetah actuator design is a notable example that several community members are considering emulating. Its design optimizes for rapid, dynamic movements and could potentially set a standard for agile robotic locomotion. It's designed to pack a lot of power into a lightweight, compact system. It offers excellent torque and control without being bulky, making it perfect for mobile robots that need to be quick on their feet. Also, it's energy-efficient and provides a high torque-to-weight ratio, so robots can move fast and precisely, which is essential for those tricky, agile movements.
One of the coolest things about this actuator is how it manages to minimize backlash, giving you smooth, accurate control over the robot's motion. Its integrated design also means the motor and controller work together seamlessly, which keeps the system streamlined. Plus, the advanced control algorithms make it easy for the actuator to handle dynamic motions, whether it's fast acceleration or sharp turns. If you're building a robot that needs to move like a sprinter while staying super nimble, the MIT Cheetah actuator is an awesome choice.
Here is the [https://fab.cba.mit.edu/classes/865.18/motion/papers/mit-cheetah-actuator.pdf MIT research paper] if you are interested in a deeper dive.
=== Open-Source Development and Collaboration ===
==== SPIN: A Revolutionary Servo Project ====
The [https://github.com/atopile/spin-servo-drive SPIN Project] by Atopile is developing an open-source hardware project aimed at making it easier and more cost-effective to use BLDC servo motors. This project is particularly notable for its potential to democratize high-quality actuator technology, making it accessible for a broader range of developers and hobbyists.
=== Community Insights and Future Directions ===
==== Comprehensive Actuator Comparisons ====
The humanoid robotics community actively discusses the need for a universal platform to compare and contrast the cost and performance of commercially available actuators. This could involve developing a comprehensive database or chart detailing each actuator's cost per Newton-meter, control schemes, and RPM, providing a valuable resource for both newcomers and experienced developers.
Here is a [https://jakeread.pages.cba.mit.edu/actuators/ scatter plot] of actuators hosted at MIT
==== Custom Actuator Developments ====
[https://irisdynamics.com/products/orca-series Iris Dynamics electric linear actuators] suggest they can match the capabilities of human muscles, making them particularly interesting for humanoid applications.
== Assembly Tips ==
===== Community Forums =====
Leverage discussions from platforms like RobotForum to avoid common pitfalls. Whether it's selecting the right planetary gearbox or figuring out the optimal motor for each joint, community insights can be invaluable.
=== Programming and Control ===
==== ROS (Robot Operating System) ====
Start with ROS for an extensive suite of tools for programming and control, suitable for managing complex robotic functions. ROS serves as a valuable abstraction for comprehending the diverse components within a robotics system. However, its substantial nature may occasionally present challenges for modification for some users. The learning curve could be steep, and due to its emphasis on leveraging third-party packages, addressing issues can require additional effort and expertise.
==== Custom Software Solutions ====
Explore custom algorithms for adaptive control or reactive behaviors. Integrate advanced sensor feedback loops for real-time adjustments.
== Experimenting with Your Humanoid Robot ==
=== Testing and Iteration ===
==== Virtual Testing Before Physical Implementation in Humanoid Robotics ====
NVIDIA's Isaac Sim and Isaac Gym, alongside other simulators, form a crucial foundation for designing and testing humanoid robots virtually. Insights and suggestions from experts working with these tools are captured below.
===== Isaac-Based Simulators and Frameworks =====
====== Isaac Sim ======
IDE Experience: Provides a comprehensive, if complex, simulation environment.
PhysX Engine: Utilizes the PhysX engine to handle both contact and joint constraints, though Isaac Sim currently does not fully expose closed-loop constraint capabilities.
Joint Constraints: Supports maximal coordinate systems, which include joint constraints that are common in articulated robots.
Virtual Sensors: Allows the simulation of perception with virtual cameras and LiDARs, providing policy training inputs rendered with NVIDIA RTX.
====== Isaac Gym ======
Reinforcement Learning Training: Enables parallel environments for fast policy training.
PHC Approach: Integrates AMP for real-time pose control, making it easier to teach new skills.
Gait Optimization Issues: While 17-DOF walking tasks work well, gait reward optimization needs refinement for more complex tasks.
Closed-Loop Articulation: Belt-driven mechanisms provide a viable alternative for certain closed-loop designs.
====== Orbit Framework ======
Unified Training Framework: Integrates Isaac Sim and Isaac Gym for modular and consistent policy validation.
OmniIsaacGymEnvs: Offers predefined tasks like walking and standing.
====== Omniverse Isaac Gym ======
Shift in Development: NVIDIA is consolidating Isaac Gym into Isaac Sim through Omniverse, providing the best of both worlds.
Challenges: Demands powerful NVIDIA GPUs, potentially limiting some development workflows.
===== External Tools and Comparative Platforms =====
====== Legged Gym ======
A repository showcasing the state-of-the-art in legged robot training.
====== MuJoCo (MJX)======
Offers a lightweight open-source alternative, supporting maximal coordinate constraints and easier to work with. The MuJoCo_MPC repository, created by Google DeepMind, is a toolset that combines Model Predictive Control (MPC) with the MuJoCo physics engine to create real-time behavior synthesis. With the advanced MJX extension, which uses GPU acceleration, it can simulate multiple environments in parallel. One approach is to try to replicate the techniques detailed in the AMP (Adversarial Motion Priors) paper to achieve agile humanoid behavior. For example, implementing a humanoid get-up sequence, which matches what was described in the AMP research.
There’s been collaboration between different projects, like Stompy, to get humanoid simulations up and running. You could try converting Gymnasium to handle the URDF (Universal Robot Description Format) file format. Although converting to MJCF (MuJoCo's XML-based format) may present some challenges, we can still get it to work and refine the motor and actuator setup.
Although MuJoCo can be slower in single-environment simulations, the MJX extension and its parallel processing potential make it a solid competitor. Compared to enviroments, like NVIDIA's Isaac Gym, MuJoCo might stand out for its extensibility and rapid development. One goal could be to try recreate the walking, running, and getting-up behaviors described in the AMP paper and use them as a foundation for training robust humanoid movements in simulation.
====== VSim ======
Claims to be 10x faster than other simulators.
====== ManiSkill/Sapien ======
Provides tactile simulation and visual-based policy training that is up to 100x faster than Isaac Sim.
===== Best Practices for Virtual Testing =====
- Incremental Complexity: Start simple and build up to more complex environments and tasks.
- Cross-Simulator Validation: Validate robot models across simulators (e.g., Isaac and MuJoCo) to ensure robustness.
- Incorporate Real-World Fidelity: Include sensor noise and imperfections for better policy generalization.
- Optimize Resources:
Use Azure's A100 GPUs for Isaac training.
Capture real-world data to refine virtual training.
By understanding the nuances and strengths of each simulator, developers can refine their humanoid robots effectively. Using Isaac Sim, Isaac Gym, and complementary tools, a robust simulation approach ensures smooth virtual-to-physical transferability while reducing development time and costs.
== Real-World Testing ==
Gradually transition to physical testing, beginning with simple tasks and moving to more complex interactions.
=== Data Collection and Analysis ===
==== Camera Systems ====
Consider integrating advanced camera systems like those from e-con Systems or Arducam for visual feedback and navigation. Discuss camera choices, considering factors like latency, resolution, and integration ease with your main control system.
== Advanced Customization and Community Engagement ==
=== Open Source Projects ===
Contribute to or start your own open-source project. For instance, platforms like GitHub host numerous projects where you can collaborate with others such as K-Scale https://github.com/kscalelabs
=== Modular Design ===
Engage in modular robot design to easily swap components or aesthetics. This approach allows for extensive customization and upgrades over time.
== Safety and Continuous Learning ==
=== Safety Protocols ===
Always implement robust safety measures when testing and demonstrating your robot.
60e633e04ad9c6dc71f614b955a04484ee964df7
877
876
2024-05-06T06:25:01Z
Vrtnis
21
/* Series Elastic and Quasi-Direct Drive Actuators */
wikitext
text/x-wiki
This is a build guide for getting started experimenting with your own humanoid robot.
This is incomplete and a work in progress; you can help by expanding it!
== Building Your Humanoid Robot ==
In humanoid robotics, choosing the right components, for example, actuators and gearboxes is crucial. Folks can use planetary and cycloidal gear actuators for their precision and strength, along with Series Elastic and Quasi-Direct Drive actuators for smoother, more natural movements. Advanced designs like the MIT Cheetah actuator push the boundaries with fast, agile movements. Projects like the SPIN initiative are also key, as they make high-quality actuator technology more accessible, helping the field evolve and improve.
== Actuators and Gearboxes ==
=== Actuator Types and Design Inspirations ===
==== Planetary and Cycloidal Gear Actuators ====
These actuators remain popular in the robotics community due to their high torque output and compact form factors. Planetary gears are favored for their efficiency and ability to handle high power densities, crucial for humanoid robotics. Cycloidal gears offer superior load-bearing capabilities and minimal backlash, ideal for precise motion control.
MyActuator (just one option) offers a variety of planetary actuators. These actuators, while still relatively pricey, offer robust performance and are integral to the efficient functioning of the builds. Some models are:
RMD X4: A lightweight and compact actuator that provides precise control and high efficiency.
RMD X6: Offers a good balance of torque and speed, suitable for medium-sized applications.
RMD X8: Features a more powerful motor and higher torque capacity, making it ideal for more demanding tasks.
RMD X10: The most powerful actuator used, designed for high torque applications with excellent control features.
==== Series Elastic and Quasi-Direct Drive Actuators ====
Series Elastic Actuators (SEAs) are used in applications requiring safe and compliant human-robot interaction. They incorporate elastic elements, allowing for energy absorption and safer interactions. Quasi-Direct Drive Actuators provide a balance between the control fidelity of direct drives and the mechanical simplicity of geared systems, promoting natural and responsive movements.
Some points to consider:
* Pick Your Springs Wisely: The springs in SEAs are where the magic happens. Choosing the right stiffness is a balancing act between precise torque control and getting sluggish responses.
*
* Calibrate, Calibrate, Calibrate: Calibration is your best buddy. Since the spring is constantly flexing, you need sensors tuned to give accurate torque measurements. Do it regularly, and you'll keep your movements smooth and predictable.
*
* Control Loops FTW: You want finely-tuned control loops to make SEAs shine. A high-frequency loop can make your robot more agile in handling external forces. Go for PID controllers perhaps.
*
* Friction: Friction can impact your torque control big time, especially in gearboxes and linkages. Low-friction components will help keep everything moving smoothly.
*
* Load Path Matters: Make sure your spring is right in the direct load path between the actuator and the joint. If not, your robot won’t get the full benefit of force sensing.
*
* Watch Out for Fatigue: Springs aren’t indestructible. With a lot of high-impact activities, they can wear out. Keep an eye on them to avoid breakdowns at the worst possible moment.
*
* Get Your Software Right: SEAs thrive on real-time feedback. Ensure your software can handle data fast enough, maybe even use a real-time operating system or optimized signal processing.
*
==== MIT Cheetah Actuator ====
The MIT Cheetah actuator design is a notable example that several community members are considering emulating. Its design optimizes for rapid, dynamic movements and could potentially set a standard for agile robotic locomotion. Its designed to pack a lot of power into a lightweight, compact system. It offers excellent torque and control without being bulky, making it perfect for mobile robots that need to be quick on their feet. Also, it's energy-efficient and provides a high torque-to-weight ratio, so robots can move fast and precisely, which is essential for those tricky, agile movements.
One of the coolest things about this actuator is how it manages to minimize backlash, giving you smooth, accurate control over the robot's motion. Its integrated design also means the motor and controller work together seamlessly, which keeps the system streamlined. Plus, the advanced control algorithms make it easy for the actuator to handle dynamic motions, whether it's fast acceleration or sharp turns. If you're building a robot that needs to move like a sprinter while staying super nimble, the MIT Cheetah actuator is an awesome choice.
Here is the [https://fab.cba.mit.edu/classes/865.18/motion/papers/mit-cheetah-actuator.pdf MIT research paper] if you are interested in a deeper dive.
=== Open-Source Development and Collaboration ===
==== SPIN: A Revolutionary Servo Project ====
The [https://github.com/atopile/spin-servo-drive SPIN Project] by Atopile is developing an open-source hardware project aimed at making it easier and more cost-effective to use BLDC servo motors. This project is particularly notable for its potential to democratize high-quality actuator technology, making it accessible for a broader range of developers and hobbyists.
=== Community Insights and Future Directions ===
==== Comprehensive Actuator Comparisons ====
The humanoid robotics community actively discusses the need for a universal platform to compare and contrast the cost and performance of commercially available actuators. This could involve developing a comprehensive database or chart detailing each actuator's cost per Newton-meter, control schemes, and RPM, providing a valuable resource for both newcomers and experienced developers.
Here is a [https://jakeread.pages.cba.mit.edu/actuators/ scatter plot] of actuators hosted at MIT
==== Custom Actuator Developments ====
[https://irisdynamics.com/products/orca-series Iris Dynamics electric linear actuators] suggest they can match the capabilities of human muscles, making them particularly interesting for humanoid applications.
== Assembly Tips ==
===== Community Forums =====
Leverage discussions from platforms like RobotForum to avoid common pitfalls. Whether it's selecting the right planetary gearbox or figuring out the optimal motor for each joint, community insights can be invaluable.
=== Programming and Control ===
==== ROS (Robot Operating System) ====
Start with ROS for an extensive suite of tools for programming and control, suitable for managing complex robotic functions. ROS serves as a valuable abstraction for comprehending the diverse components within a robotics system. However, its substantial nature may occasionally present challenges for modification for some users. The learning curve could be steep, and due to its emphasis on leveraging third-party packages, addressing issues can require additional effort and expertise.
==== Custom Software Solutions ====
Explore custom algorithms for adaptive control or reactive behaviors. Integrate advanced sensor feedback loops for real-time adjustments.
== Experimenting with Your Humanoid Robot ==
=== Testing and Iteration ===
==== Virtual Testing Before Physical Implementation in Humanoid Robotics ====
NVIDIA's Isaac Sim and Isaac Gym, alongside other simulators, form a crucial foundation for designing and testing humanoid robots virtually. Insights and suggestions from experts working with these tools are captured below.
===== Isaac-Based Simulators and Frameworks =====
====== Isaac Sim ======
IDE Experience: Provides a comprehensive, if complex, simulation environment.
PhysX Engine: Utilizes the PhysX engine to handle both contact and joint constraints, though Isaac Sim currently does not fully expose closed-loop constraint capabilities.
Joint Constraints: Supports maximal coordinate systems, which include joint constraints that are common in articulated robots.
Virtual Sensors: Allows the simulation of perception with virtual cameras and LiDARs, providing policy training inputs rendered with NVIDIA RTX.
====== Isaac Gym ======
Reinforcement Learning Training: Enables parallel environments for fast policy training.
PHC Approach: Integrates AMP for real-time pose control, making it easier to teach new skills.
Gait Optimization Issues: While 17-DOF walking tasks work well, gait reward optimization needs refinement for more complex tasks.
Closed-Loop Articulation: Belt-driven mechanisms provide a viable alternative for certain closed-loop designs.
====== Orbit Framework ======
Unified Training Framework: Integrates Isaac Sim and Isaac Gym for modular and consistent policy validation.
OmniIsaacGymEnvs: Offers predefined tasks like walking and standing.
====== Omniverse Isaac Gym ======
Shift in Development: NVIDIA is consolidating Isaac Gym into Isaac Sim through Omniverse, providing the best of both worlds.
Challenges: Demands powerful NVIDIA GPUs, potentially limiting some development workflows.
===== External Tools and Comparative Platforms =====
====== Legged Gym ======
A repository showcasing the state-of-the-art in legged robot training.
====== MuJoCo (MJX)======
Offers a lightweight open-source alternative, supporting maximal coordinate constraints and easier to work with. The MuJoCo_MPC repository, created by Google DeepMind, is a toolset that combines Model Predictive Control (MPC) with the MuJoCo physics engine to create real-time behavior synthesis. With the advanced MJX extension, which uses GPU acceleration, it can simulate multiple environments in parallel. One approach is to try to replicate the techniques detailed in the AMP (Adversarial Motion Priors) paper to achieve agile humanoid behavior. For example, implementing a humanoid get-up sequence, which matches what was described in the AMP research.
There’s been collaboration between different projects, like Stompy, to get humanoid simulations up and running. You could try converting Gymnasium to handle the URDF (Universal Robot Description Format) file format. Although converting to MJCF (MuJoCo's XML-based format) may present some challenges, we can still get it to work and refine the motor and actuator setup.
Although MuJoCo can be slower in single-environment simulations, the MJX extension and its parallel processing potential make it a solid competitor. Compared to enviroments, like NVIDIA's Isaac Gym, MuJoCo might stand out for its extensibility and rapid development. One goal could be to try recreate the walking, running, and getting-up behaviors described in the AMP paper and use them as a foundation for training robust humanoid movements in simulation.
====== VSim ======
Claims to be 10x faster than other simulators.
====== ManiSkill/Sapien ======
Provides tactile simulation and visual-based policy training that is up to 100x faster than Isaac Sim.
===== Best Practices for Virtual Testing =====
- Incremental Complexity: Start simple and build up to more complex environments and tasks.
- Cross-Simulator Validation: Validate robot models across simulators (e.g., Isaac and MuJoCo) to ensure robustness.
- Incorporate Real-World Fidelity: Include sensor noise and imperfections for better policy generalization.
- Optimize Resources:
Use Azure's A100 GPUs for Isaac training.
Capture real-world data to refine virtual training.
By understanding the nuances and strengths of each simulator, developers can refine their humanoid robots effectively. Using Isaac Sim, Isaac Gym, and complementary tools, a robust simulation approach ensures smooth virtual-to-physical transferability while reducing development time and costs.
== Real-World Testing ==
Gradually transition to physical testing, beginning with simple tasks and moving to more complex interactions.
=== Data Collection and Analysis ===
==== Camera Systems ====
Consider integrating advanced camera systems like those from e-con Systems or Arducam for visual feedback and navigation. Discuss camera choices, considering factors like latency, resolution, and integration ease with your main control system.
== Advanced Customization and Community Engagement ==
=== Open Source Projects ===
Contribute to or start your own open-source project. For instance, platforms like GitHub host numerous projects where you can collaborate with others such as K-Scale https://github.com/kscalelabs
=== Modular Design ===
Engage in modular robot design to easily swap components or aesthetics. This approach allows for extensive customization and upgrades over time.
== Safety and Continuous Learning ==
=== Safety Protocols ===
Always implement robust safety measures when testing and demonstrating your robot.
b34c16bcd9ca60f69f38061a50e745d2451d6951
878
877
2024-05-06T06:26:37Z
Vrtnis
21
/* Series Elastic and Quasi-Direct Drive Actuators */
wikitext
text/x-wiki
This is a build guide for getting started experimenting with your own humanoid robot.
This is incomplete and a work in progress; you can help by expanding it!
== Building Your Humanoid Robot ==
In humanoid robotics, choosing the right components, for example, actuators and gearboxes is crucial. Folks can use planetary and cycloidal gear actuators for their precision and strength, along with Series Elastic and Quasi-Direct Drive actuators for smoother, more natural movements. Advanced designs like the MIT Cheetah actuator push the boundaries with fast, agile movements. Projects like the SPIN initiative are also key, as they make high-quality actuator technology more accessible, helping the field evolve and improve.
== Actuators and Gearboxes ==
=== Actuator Types and Design Inspirations ===
==== Planetary and Cycloidal Gear Actuators ====
These actuators remain popular in the robotics community due to their high torque output and compact form factors. Planetary gears are favored for their efficiency and ability to handle high power densities, crucial for humanoid robotics. Cycloidal gears offer superior load-bearing capabilities and minimal backlash, ideal for precise motion control.
MyActuator (just one option) offers a variety of planetary actuators. These actuators, while still relatively pricey, offer robust performance and are integral to the efficient functioning of the builds. Some models are:
RMD X4: A lightweight and compact actuator that provides precise control and high efficiency.
RMD X6: Offers a good balance of torque and speed, suitable for medium-sized applications.
RMD X8: Features a more powerful motor and higher torque capacity, making it ideal for more demanding tasks.
RMD X10: The most powerful actuator used, designed for high torque applications with excellent control features.
==== Series Elastic and Quasi-Direct Drive Actuators ====
Series Elastic Actuators (SEAs) are used in applications requiring safe and compliant human-robot interaction. They incorporate elastic elements, allowing for energy absorption and safer interactions. Quasi-Direct Drive Actuators provide a balance between the control fidelity of direct drives and the mechanical simplicity of geared systems, promoting natural and responsive movements.
Some things to consider:
The springs in SEAs are where the magic happens. Choosing the right stiffness is a balancing act between getting precise torque control and avoiding sluggish responses. Finding that sweet spot is key.
Calibration is your best buddy. Since the spring is constantly flexing, you need sensors tuned to give accurate torque measurements. Do it regularly, and you'll keep those movements smooth and predictable.
You want finely-tuned control loops to make SEAs shine. A high-frequency loop can make your robot more agile in handling external forces. PID controllers are a solid starting point, or you can try out some advanced strategies.
Friction can really impact your torque control, especially in gearboxes and linkages. Using low-friction components and proper lubrication will help keep everything moving smoothly.
Make sure your spring is positioned directly between the actuator and the joint. If not, your robot won’t get the full benefit of force sensing, and that precision will be lost.
Springs aren’t indestructible. If your robot is doing a lot of high-impact activities, the springs can wear out. Keep an eye on them to avoid breakdowns when you least expect them.
SEAs thrive on real-time feedback. Ensure your software can handle data quickly, maybe using a real-time operating system or optimized signal processing.
==== MIT Cheetah Actuator ====
The MIT Cheetah actuator design is a notable example that several community members are considering emulating. Its design optimizes for rapid, dynamic movements and could potentially set a standard for agile robotic locomotion. Its designed to pack a lot of power into a lightweight, compact system. It offers excellent torque and control without being bulky, making it perfect for mobile robots that need to be quick on their feet. Also, it's energy-efficient and provides a high torque-to-weight ratio, so robots can move fast and precisely, which is essential for those tricky, agile movements.
One of the coolest things about this actuator is how it manages to minimize backlash, giving you smooth, accurate control over the robot's motion. Its integrated design also means the motor and controller work together seamlessly, which keeps the system streamlined. Plus, the advanced control algorithms make it easy for the actuator to handle dynamic motions, whether it's fast acceleration or sharp turns. If you're building a robot that needs to move like a sprinter while staying super nimble, the MIT Cheetah actuator is an awesome choice.
Here is the [https://fab.cba.mit.edu/classes/865.18/motion/papers/mit-cheetah-actuator.pdf MIT research paper] if you are interested in a deeper dive.
=== Open-Source Development and Collaboration ===
==== SPIN: A Revolutionary Servo Project ====
The [https://github.com/atopile/spin-servo-drive SPIN Project] by Atopile is developing an open-source hardware project aimed at making it easier and more cost-effective to use BLDC servo motors. This project is particularly notable for its potential to democratize high-quality actuator technology, making it accessible for a broader range of developers and hobbyists.
=== Community Insights and Future Directions ===
==== Comprehensive Actuator Comparisons ====
The humanoid robotics community actively discusses the need for a universal platform to compare and contrast the cost and performance of commercially available actuators. This could involve developing a comprehensive database or chart detailing each actuator's cost per Newton-meter, control schemes, and RPM, providing a valuable resource for both newcomers and experienced developers.
Here is a [https://jakeread.pages.cba.mit.edu/actuators/ scatter plot] of actuators hosted at MIT
==== Custom Actuator Developments ====
[https://irisdynamics.com/products/orca-series Iris Dynamics electric linear actuators] suggest they can match the capabilities of human muscles, making them particularly interesting for humanoid applications.
== Assembly Tips ==
===== Community Forums =====
Leverage discussions from platforms like RobotForum to avoid common pitfalls. Whether it's selecting the right planetary gearbox or figuring out the optimal motor for each joint, community insights can be invaluable.
=== Programming and Control ===
==== ROS (Robot Operating System) ====
Start with ROS for an extensive suite of tools for programming and control, suitable for managing complex robotic functions. ROS serves as a valuable abstraction for comprehending the diverse components within a robotics system. However, its substantial nature may occasionally present challenges for modification for some users. The learning curve could be steep, and due to its emphasis on leveraging third-party packages, addressing issues can require additional effort and expertise.
==== Custom Software Solutions ====
Explore custom algorithms for adaptive control or reactive behaviors. Integrate advanced sensor feedback loops for real-time adjustments.
== Experimenting with Your Humanoid Robot ==
=== Testing and Iteration ===
==== Virtual Testing Before Physical Implementation in Humanoid Robotics ====
NVIDIA's Isaac Sim and Isaac Gym, alongside other simulators, form a crucial foundation for designing and testing humanoid robots virtually. Insights and suggestions from experts working with these tools are captured below.
===== Isaac-Based Simulators and Frameworks =====
====== Isaac Sim ======
IDE Experience: Provides a comprehensive, if complex, simulation environment.
PhysX Engine: Utilizes the PhysX engine to handle both contact and joint constraints, though Isaac Sim currently does not fully expose closed-loop constraint capabilities.
Joint Constraints: Supports maximal coordinate systems, which include joint constraints that are common in articulated robots.
Virtual Sensors: Allows the simulation of perception with virtual cameras and LiDARs, providing policy training inputs rendered with NVIDIA RTX.
====== Isaac Gym ======
Reinforcement Learning Training: Enables parallel environments for fast policy training.
PHC Approach: Integrates AMP for real-time pose control, making it easier to teach new skills.
Gait Optimization Issues: While 17-DOF walking tasks work well, gait reward optimization needs refinement for more complex tasks.
Closed-Loop Articulation: Belt-driven mechanisms provide a viable alternative for certain closed-loop designs.
====== Orbit Framework ======
Unified Training Framework: Integrates Isaac Sim and Isaac Gym for modular and consistent policy validation.
OmniIsaacGymEnvs: Offers predefined tasks like walking and standing.
====== Omniverse Isaac Gym ======
Shift in Development: NVIDIA is consolidating Isaac Gym into Isaac Sim through Omniverse, providing the best of both worlds.
Challenges: Demands powerful NVIDIA GPUs, potentially limiting some development workflows.
===== External Tools and Comparative Platforms =====
====== Legged Gym ======
A repository showcasing the state-of-the-art in legged robot training.
====== MuJoCo (MJX)======
Offers a lightweight open-source alternative, supporting maximal coordinate constraints and easier to work with. The MuJoCo_MPC repository, created by Google DeepMind, is a toolset that combines Model Predictive Control (MPC) with the MuJoCo physics engine to create real-time behavior synthesis. With the advanced MJX extension, which uses GPU acceleration, it can simulate multiple environments in parallel. One approach is to try to replicate the techniques detailed in the AMP (Adversarial Motion Priors) paper to achieve agile humanoid behavior. For example, implementing a humanoid get-up sequence, which matches what was described in the AMP research.
There’s been collaboration between different projects, like Stompy, to get humanoid simulations up and running. You could try converting Gymnasium to handle the URDF (Universal Robot Description Format) file format. Although converting to MJCF (MuJoCo's XML-based format) may present some challenges, we can still get it to work and refine the motor and actuator setup.
Although MuJoCo can be slower in single-environment simulations, the MJX extension and its parallel processing potential make it a solid competitor. Compared to enviroments, like NVIDIA's Isaac Gym, MuJoCo might stand out for its extensibility and rapid development. One goal could be to try recreate the walking, running, and getting-up behaviors described in the AMP paper and use them as a foundation for training robust humanoid movements in simulation.
====== VSim ======
Claims to be 10x faster than other simulators.
====== ManiSkill/Sapien ======
Provides tactile simulation and visual-based policy training that is up to 100x faster than Isaac Sim.
===== Best Practices for Virtual Testing =====
- Incremental Complexity: Start simple and build up to more complex environments and tasks.
- Cross-Simulator Validation: Validate robot models across simulators (e.g., Isaac and MuJoCo) to ensure robustness.
- Incorporate Real-World Fidelity: Include sensor noise and imperfections for better policy generalization.
- Optimize Resources:
Use Azure's A100 GPUs for Isaac training.
Capture real-world data to refine virtual training.
By understanding the nuances and strengths of each simulator, developers can refine their humanoid robots effectively. Using Isaac Sim, Isaac Gym, and complementary tools, a robust simulation approach ensures smooth virtual-to-physical transferability while reducing development time and costs.
== Real-World Testing ==
Gradually transition to physical testing, beginning with simple tasks and moving to more complex interactions.
=== Data Collection and Analysis ===
==== Camera Systems ====
Consider integrating advanced camera systems like those from e-con Systems or Arducam for visual feedback and navigation. Discuss camera choices, considering factors like latency, resolution, and integration ease with your main control system.
== Advanced Customization and Community Engagement ==
=== Open Source Projects ===
Contribute to or start your own open-source project. For instance, platforms like GitHub host numerous projects where you can collaborate with others such as K-Scale https://github.com/kscalelabs
=== Modular Design ===
Engage in modular robot design to easily swap components or aesthetics. This approach allows for extensive customization and upgrades over time.
== Safety and Continuous Learning ==
=== Safety Protocols ===
Always implement robust safety measures when testing and demonstrating your robot.
ff070237df72a4ef35fe2813437568acecd49cb5
879
878
2024-05-06T06:26:57Z
Vrtnis
21
/* Series Elastic and Quasi-Direct Drive Actuators */
wikitext
text/x-wiki
This is a build guide for getting started experimenting with your own humanoid robot.
This is incomplete and a work in progress; you can help by expanding it!
== Building Your Humanoid Robot ==
In humanoid robotics, choosing the right components, for example, actuators and gearboxes is crucial. Folks can use planetary and cycloidal gear actuators for their precision and strength, along with Series Elastic and Quasi-Direct Drive actuators for smoother, more natural movements. Advanced designs like the MIT Cheetah actuator push the boundaries with fast, agile movements. Projects like the SPIN initiative are also key, as they make high-quality actuator technology more accessible, helping the field evolve and improve.
== Actuators and Gearboxes ==
=== Actuator Types and Design Inspirations ===
==== Planetary and Cycloidal Gear Actuators ====
These actuators remain popular in the robotics community due to their high torque output and compact form factors. Planetary gears are favored for their efficiency and ability to handle high power densities, crucial for humanoid robotics. Cycloidal gears offer superior load-bearing capabilities and minimal backlash, ideal for precise motion control.
MyActuator (just one option) offers a variety of planetary actuators. These actuators, while still relatively pricey, offer robust performance and are integral to the efficient functioning of the builds. Some models are:
RMD X4: A lightweight and compact actuator that provides precise control and high efficiency.
RMD X6: Offers a good balance of torque and speed, suitable for medium-sized applications.
RMD X8: Features a more powerful motor and higher torque capacity, making it ideal for more demanding tasks.
RMD X10: The most powerful actuator used, designed for high torque applications with excellent control features.
==== Series Elastic and Quasi-Direct Drive Actuators ====
Series Elastic Actuators (SEAs) are used in applications requiring safe and compliant human-robot interaction. They incorporate elastic elements, allowing for energy absorption and safer interactions. Quasi-Direct Drive Actuators provide a balance between the control fidelity of direct drives and the mechanical simplicity of geared systems, promoting natural and responsive movements.
Some things to consider:
The springs in SEAs are where the magic happens. Choosing the right stiffness is a balancing act between getting precise torque control and avoiding sluggish responses.
Calibration is your best buddy. Since the spring is constantly flexing, you need sensors tuned to give accurate torque measurements. Do it regularly, and you'll keep those movements smooth and predictable.
You want finely-tuned control loops to make SEAs shine. A high-frequency loop can make your robot more agile in handling external forces. PID controllers are a solid starting point, or you can try out some advanced strategies.
Friction can really impact your torque control, especially in gearboxes and linkages. Using low-friction components and proper lubrication will help keep everything moving smoothly.
Make sure your spring is positioned directly between the actuator and the joint. If not, your robot won’t get the full benefit of force sensing, and that precision will be lost.
Springs aren’t indestructible. If your robot is doing a lot of high-impact activities, the springs can wear out. Keep an eye on them to avoid breakdowns when you least expect them.
SEAs thrive on real-time feedback. Ensure your software can handle data quickly, maybe using a real-time operating system or optimized signal processing.
==== MIT Cheetah Actuator ====
The MIT Cheetah actuator design is a notable example that several community members are considering emulating. Its design optimizes for rapid, dynamic movements and could potentially set a standard for agile robotic locomotion. Its designed to pack a lot of power into a lightweight, compact system. It offers excellent torque and control without being bulky, making it perfect for mobile robots that need to be quick on their feet. Also, it's energy-efficient and provides a high torque-to-weight ratio, so robots can move fast and precisely, which is essential for those tricky, agile movements.
One of the coolest things about this actuator is how it manages to minimize backlash, giving you smooth, accurate control over the robot's motion. Its integrated design also means the motor and controller work together seamlessly, which keeps the system streamlined. Plus, the advanced control algorithms make it easy for the actuator to handle dynamic motions, whether it's fast acceleration or sharp turns. If you're building a robot that needs to move like a sprinter while staying super nimble, the MIT Cheetah actuator is an awesome choice.
Here is the [https://fab.cba.mit.edu/classes/865.18/motion/papers/mit-cheetah-actuator.pdf MIT research paper] if you are interested in a deeper dive.
=== Open-Source Development and Collaboration ===
==== SPIN: A Revolutionary Servo Project ====
The [https://github.com/atopile/spin-servo-drive SPIN Project] by Atopile is developing an open-source hardware project aimed at making it easier and more cost-effective to use BLDC servo motors. This project is particularly notable for its potential to democratize high-quality actuator technology, making it accessible for a broader range of developers and hobbyists.
=== Community Insights and Future Directions ===
==== Comprehensive Actuator Comparisons ====
The humanoid robotics community actively discusses the need for a universal platform to compare and contrast the cost and performance of commercially available actuators. This could involve developing a comprehensive database or chart detailing each actuator's cost per Newton-meter, control schemes, and RPM, providing a valuable resource for both newcomers and experienced developers.
Here is a [https://jakeread.pages.cba.mit.edu/actuators/ scatter plot] of actuators hosted at MIT
==== Custom Actuator Developments ====
[https://irisdynamics.com/products/orca-series Iris Dynamics electric linear actuators] suggest they can match the capabilities of human muscles, making them particularly interesting for humanoid applications.
== Assembly Tips ==
===== Community Forums =====
Leverage discussions from platforms like RobotForum to avoid common pitfalls. Whether it's selecting the right planetary gearbox or figuring out the optimal motor for each joint, community insights can be invaluable.
=== Programming and Control ===
==== ROS (Robot Operating System) ====
Start with ROS for an extensive suite of tools for programming and control, suitable for managing complex robotic functions. ROS serves as a valuable abstraction for comprehending the diverse components within a robotics system. However, its substantial nature may occasionally present challenges for modification for some users. The learning curve could be steep, and due to its emphasis on leveraging third-party packages, addressing issues can require additional effort and expertise.
==== Custom Software Solutions ====
Explore custom algorithms for adaptive control or reactive behaviors. Integrate advanced sensor feedback loops for real-time adjustments.
== Experimenting with Your Humanoid Robot ==
=== Testing and Iteration ===
==== Virtual Testing Before Physical Implementation in Humanoid Robotics ====
NVIDIA's Isaac Sim and Isaac Gym, alongside other simulators, form a crucial foundation for designing and testing humanoid robots virtually. Insights and suggestions from experts working with these tools are captured below.
===== Isaac-Based Simulators and Frameworks =====
====== Isaac Sim ======
IDE Experience: Provides a comprehensive, if complex, simulation environment.
PhysX Engine: Utilizes the PhysX engine to handle both contact and joint constraints, though Isaac Sim currently does not fully expose closed-loop constraint capabilities.
Joint Constraints: Supports maximal coordinate systems, which include joint constraints that are common in articulated robots.
Virtual Sensors: Allows the simulation of perception with virtual cameras and LiDARs, providing policy training inputs rendered with NVIDIA RTX.
====== Isaac Gym ======
Reinforcement Learning Training: Enables parallel environments for fast policy training.
PHC Approach: Integrates AMP for real-time pose control, making it easier to teach new skills.
Gait Optimization Issues: While 17-DOF walking tasks work well, gait reward optimization needs refinement for more complex tasks.
Closed-Loop Articulation: Belt-driven mechanisms provide a viable alternative for certain closed-loop designs.
====== Orbit Framework ======
Unified Training Framework: Integrates Isaac Sim and Isaac Gym for modular and consistent policy validation.
OmniIsaacGymEnvs: Offers predefined tasks like walking and standing.
====== Omniverse Isaac Gym ======
Shift in Development: NVIDIA is consolidating Isaac Gym into Isaac Sim through Omniverse, providing the best of both worlds.
Challenges: Demands powerful NVIDIA GPUs, potentially limiting some development workflows.
===== External Tools and Comparative Platforms =====
====== Legged Gym ======
A repository showcasing the state-of-the-art in legged robot training.
====== MuJoCo (MJX)======
Offers a lightweight open-source alternative, supporting maximal coordinate constraints and easier to work with. The MuJoCo_MPC repository, created by Google DeepMind, is a toolset that combines Model Predictive Control (MPC) with the MuJoCo physics engine to create real-time behavior synthesis. With the advanced MJX extension, which uses GPU acceleration, it can simulate multiple environments in parallel. One approach is to try to replicate the techniques detailed in the AMP (Adversarial Motion Priors) paper to achieve agile humanoid behavior. For example, implementing a humanoid get-up sequence, which matches what was described in the AMP research.
There’s been collaboration between different projects, like Stompy, to get humanoid simulations up and running. You could try converting Gymnasium to handle the URDF (Universal Robot Description Format) file format. Although converting to MJCF (MuJoCo's XML-based format) may present some challenges, we can still get it to work and refine the motor and actuator setup.
Although MuJoCo can be slower in single-environment simulations, the MJX extension and its parallel processing potential make it a solid competitor. Compared to enviroments, like NVIDIA's Isaac Gym, MuJoCo might stand out for its extensibility and rapid development. One goal could be to try recreate the walking, running, and getting-up behaviors described in the AMP paper and use them as a foundation for training robust humanoid movements in simulation.
====== VSim ======
Claims to be 10x faster than other simulators.
====== ManiSkill/Sapien ======
Provides tactile simulation and visual-based policy training that is up to 100x faster than Isaac Sim.
===== Best Practices for Virtual Testing =====
- Incremental Complexity: Start simple and build up to more complex environments and tasks.
- Cross-Simulator Validation: Validate robot models across simulators (e.g., Isaac and MuJoCo) to ensure robustness.
- Incorporate Real-World Fidelity: Include sensor noise and imperfections for better policy generalization.
- Optimize Resources:
Use Azure's A100 GPUs for Isaac training.
Capture real-world data to refine virtual training.
By understanding the nuances and strengths of each simulator, developers can refine their humanoid robots effectively. Using Isaac Sim, Isaac Gym, and complementary tools, a robust simulation approach ensures smooth virtual-to-physical transferability while reducing development time and costs.
== Real-World Testing ==
Gradually transition to physical testing, beginning with simple tasks and moving to more complex interactions.
=== Data Collection and Analysis ===
==== Camera Systems ====
Consider integrating advanced camera systems like those from e-con Systems or Arducam for visual feedback and navigation. Discuss camera choices, considering factors like latency, resolution, and integration ease with your main control system.
== Advanced Customization and Community Engagement ==
=== Open Source Projects ===
Contribute to or start your own open-source project. For instance, platforms like GitHub host numerous projects where you can collaborate with others such as K-Scale https://github.com/kscalelabs
=== Modular Design ===
Engage in modular robot design to easily swap components or aesthetics. This approach allows for extensive customization and upgrades over time.
== Safety and Continuous Learning ==
=== Safety Protocols ===
Always implement robust safety measures when testing and demonstrating your robot.
b169e5ad51bd9d40c6a8fb37d35ccb54cb397c87
880
879
2024-05-06T06:35:34Z
Vrtnis
21
/* Series Elastic and Quasi-Direct Drive Actuators */
wikitext
text/x-wiki
This is a build guide for getting started experimenting with your own humanoid robot.
This is incomplete and a work in progress; you can help by expanding it!
== Building Your Humanoid Robot ==
In humanoid robotics, choosing the right components, such as actuators and gearboxes, is crucial. Builders often use planetary and cycloidal gear actuators for their precision and strength, along with Series Elastic and Quasi-Direct Drive actuators for smoother, more natural movements. Advanced designs like the MIT Cheetah actuator push the boundaries with fast, agile movements. Projects like the SPIN initiative are also key, as they make high-quality actuator technology more accessible, helping the field evolve and improve.
== Actuators and Gearboxes ==
=== Actuator Types and Design Inspirations ===
==== Planetary and Cycloidal Gear Actuators ====
These actuators remain popular in the robotics community due to their high torque output and compact form factors. Planetary gears are favored for their efficiency and ability to handle high power densities, crucial for humanoid robotics. Cycloidal gears offer superior load-bearing capabilities and minimal backlash, ideal for precise motion control.
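When sizing one of these geared actuators, a quick back-of-the-envelope check is that output torque scales with the gear ratio and gearbox efficiency, while output speed drops by the same ratio. The sketch below is illustrative only; the default efficiency is an assumption, so replace it with the value from your gearbox datasheet.
<syntaxhighlight lang=python>
def geared_output(motor_torque_nm, motor_speed_rpm, gear_ratio, efficiency=0.85):
    """Estimate output torque and speed for a geared actuator.

    efficiency is a placeholder (planetary stages are often in the 0.8-0.9
    range per stage); use the figure from your gearbox datasheet.
    """
    output_torque = motor_torque_nm * gear_ratio * efficiency
    output_speed = motor_speed_rpm / gear_ratio
    return output_torque, output_speed

# Example: a 1 N*m motor at 3000 RPM behind a 9:1 planetary stage
print(geared_output(1.0, 3000.0, 9.0))  # roughly (7.65 N*m, 333 RPM)
</syntaxhighlight>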
MyActuator (just one option) offers a variety of planetary actuators. While still relatively pricey, they offer robust performance and are integral to many community builds. Some models are:
* RMD X4: A lightweight and compact actuator that provides precise control and high efficiency.
* RMD X6: Offers a good balance of torque and speed, suitable for medium-sized applications.
* RMD X8: Features a more powerful motor and higher torque capacity, making it ideal for more demanding tasks.
* RMD X10: The most powerful of the series, designed for high-torque applications with excellent control features.
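These RMD units are typically commanded over a CAN bus. The sketch below shows only the transport layer using python-can; the base arbitration ID and the payload bytes are placeholders, so take the real command framing from the MyActuator RMD protocol manual.
<syntaxhighlight lang=python>
# Transport-layer sketch only: sending one CAN frame to an RMD-style actuator
# with python-can. The arbitration ID scheme and payload below are placeholders;
# consult the MyActuator RMD protocol manual for the actual command framing.
import can

MOTOR_ID = 1
BASE_ID = 0x140            # placeholder base ID; verify against the RMD manual

with can.interface.Bus(channel="can0", interface="socketcan") as bus:
    frame = can.Message(
        arbitration_id=BASE_ID + MOTOR_ID,
        data=[0x00] * 8,   # placeholder payload, NOT a real RMD command
        is_extended_id=False,
    )
    bus.send(frame)
    reply = bus.recv(timeout=0.1)  # actuators usually answer with a status frame
    print(reply)
</syntaxhighlight>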
==== Series Elastic and Quasi-Direct Drive Actuators ====
Series Elastic Actuators (SEAs) are used in applications requiring safe and compliant human-robot interaction. They incorporate elastic elements, allowing for energy absorption and safer interactions. Quasi-Direct Drive Actuators provide a balance between the control fidelity of direct drives and the mechanical simplicity of geared systems, promoting natural and responsive movements.
Some things to consider:
* Pick your springs wisely: the springs in SEAs are where the magic happens. Choosing the right stiffness is a balancing act between getting precise torque control and avoiding sluggish responses.
* Calibrate regularly: since the spring is constantly flexing, you need sensors tuned to give accurate torque measurements. Recalibrate often and you'll keep those movements smooth and predictable.
* Tune your control loops: finely-tuned control loops are what make SEAs shine, and a high-frequency loop makes the robot more agile in handling external forces. PID controllers are a solid starting point (see the sketch below), or you can try more advanced strategies.
* Mind friction: friction can really impact your torque control, especially in gearboxes and linkages. Low-friction components and proper lubrication will help keep everything moving smoothly.
* Keep the spring in the load path: make sure the spring sits directly between the actuator and the joint. If not, your robot won't get the full benefit of force sensing and that precision is lost.
* Watch for fatigue: springs aren't indestructible. If your robot is doing a lot of high-impact activities, the springs can wear out, so keep an eye on them to avoid breakdowns when you least expect them.
* Get the software right: SEAs thrive on real-time feedback. Make sure your software can handle data quickly, perhaps using a real-time operating system or optimized signal processing.
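As a starting point for that inner loop, here is a minimal sketch of a series-elastic torque controller. It assumes a linear spring of known stiffness with encoders on both sides; the gains, stiffness, and callback names are illustrative, not values from any particular build.
<syntaxhighlight lang=python>
# Minimal series-elastic torque control sketch. Assumes a linear spring of
# stiffness K_SPRING with encoders on both sides; all names, gains, and
# values are illustrative placeholders, not from a specific robot.
import time

K_SPRING = 120.0          # N*m/rad, assumed spring stiffness
KP, KI, KD = 8.0, 0.5, 0.05
LOOP_HZ = 1000.0          # SEAs benefit from a fast inner loop

def run_torque_loop(read_motor_angle, read_joint_angle, command_motor,
                    torque_setpoint, duration_s=1.0):
    """PID loop driving the spring-deflection torque estimate to the setpoint."""
    dt = 1.0 / LOOP_HZ
    integral, prev_error = 0.0, 0.0
    for _ in range(int(duration_s * LOOP_HZ)):
        deflection = read_motor_angle() - read_joint_angle()
        measured_torque = K_SPRING * deflection          # torque inferred from the spring
        error = torque_setpoint - measured_torque
        integral += error * dt
        derivative = (error - prev_error) / dt
        command_motor(KP * error + KI * integral + KD * derivative)
        prev_error = error
        time.sleep(dt)
</syntaxhighlight>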
==== MIT Cheetah Actuator ====
The MIT Cheetah actuator design is a notable example that several community members are considering emulating. Its design optimizes for rapid, dynamic movements and could potentially set a standard for agile robotic locomotion. It's designed to pack a lot of power into a lightweight, compact system. It offers excellent torque and control without being bulky, making it perfect for mobile robots that need to be quick on their feet. Also, it's energy-efficient and provides a high torque-to-weight ratio, so robots can move fast and precisely, which is essential for those tricky, agile movements.
One of the coolest things about this actuator is how it manages to minimize backlash, giving you smooth, accurate control over the robot's motion. Its integrated design also means the motor and controller work together seamlessly, which keeps the system streamlined. Plus, the advanced control algorithms make it easy for the actuator to handle dynamic motions, whether it's fast acceleration or sharp turns. If you're building a robot that needs to move like a sprinter while staying super nimble, the MIT Cheetah actuator is an awesome choice.
Here is the [https://fab.cba.mit.edu/classes/865.18/motion/papers/mit-cheetah-actuator.pdf MIT research paper] if you are interested in a deeper dive.
=== Open-Source Development and Collaboration ===
==== SPIN: A Revolutionary Servo Project ====
The [https://github.com/atopile/spin-servo-drive SPIN Project] by Atopile is an open-source hardware effort aimed at making it easier and more cost-effective to use BLDC servo motors. It is particularly notable for its potential to democratize high-quality actuator technology, making it accessible to a broader range of developers and hobbyists.
=== Community Insights and Future Directions ===
==== Comprehensive Actuator Comparisons ====
The humanoid robotics community actively discusses the need for a universal platform to compare and contrast the cost and performance of commercially available actuators. This could involve developing a comprehensive database or chart detailing each actuator's cost per Newton-meter, control schemes, and RPM, providing a valuable resource for both newcomers and experienced developers.
Here is a [https://jakeread.pages.cba.mit.edu/actuators/ scatter plot] of actuators hosted at MIT.
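If someone wanted to prototype such a comparison, the data structure can start out very simple. The entries below are placeholders with invented numbers, included only to show the shape of the table, not real specifications or prices.
<syntaxhighlight lang=python>
# Skeleton for a community actuator-comparison table. The entries below are
# placeholders with invented numbers to show the data shape; replace them
# with real datasheet values and prices.
actuators = [
    {"name": "example-actuator-a", "price_usd": 300.0, "peak_torque_nm": 10.0, "max_rpm": 200},
    {"name": "example-actuator-b", "price_usd": 500.0, "peak_torque_nm": 25.0, "max_rpm": 120},
]

def cost_per_nm(entry):
    """Dollars per Newton-meter of peak torque."""
    return entry["price_usd"] / entry["peak_torque_nm"]

for entry in sorted(actuators, key=cost_per_nm):
    print(f'{entry["name"]}: {cost_per_nm(entry):.1f} USD/Nm at {entry["max_rpm"]} RPM')
</syntaxhighlight>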
==== Custom Actuator Developments ====
Iris Dynamics suggests its [https://irisdynamics.com/products/orca-series Orca-series electric linear actuators] can match the capabilities of human muscles, making them particularly interesting for humanoid applications.
== Assembly Tips ==
=== Community Forums ===
Leverage discussions from platforms like RobotForum to avoid common pitfalls. Whether it's selecting the right planetary gearbox or figuring out the optimal motor for each joint, community insights can be invaluable.
=== Programming and Control ===
==== ROS (Robot Operating System) ====
Start with ROS for an extensive suite of tools for programming and control, suitable for managing complex robotic functions. ROS serves as a valuable abstraction for understanding the diverse components within a robotics system. However, it is a large framework, and modifying it can be challenging for some users. The learning curve can be steep, and because it leans heavily on third-party packages, tracking down issues can require additional effort and expertise.
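To give a feel for the programming model, here is a minimal ROS 1 (rospy) node that publishes a joint command at a fixed rate. The topic name, message type, and rate are illustrative choices rather than part of any particular robot's interface.
<syntaxhighlight lang=python>
#!/usr/bin/env python
# Minimal ROS 1 (rospy) publisher sketch; the topic name and rate are illustrative.
import rospy
from std_msgs.msg import Float64

def main():
    rospy.init_node("torque_commander")
    pub = rospy.Publisher("/right_knee/torque_cmd", Float64, queue_size=10)  # hypothetical topic
    rate = rospy.Rate(100)  # 100 Hz command loop
    while not rospy.is_shutdown():
        pub.publish(Float64(data=0.0))  # replace with your controller's output
        rate.sleep()

if __name__ == "__main__":
    main()
</syntaxhighlight>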
==== Custom Software Solutions ====
Explore custom algorithms for adaptive control or reactive behaviors. Integrate advanced sensor feedback loops for real-time adjustments.
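As one concrete example of such a feedback loop, a complementary filter is a common way to fuse gyro and accelerometer readings into a pitch estimate that a balance controller can act on. The sketch below is generic; the blend factor and the fake readings are assumptions.
<syntaxhighlight lang=python>
# Generic complementary filter for pitch estimation from an IMU.
# The blend factor ALPHA and the example readings are illustrative.
import math

ALPHA = 0.98   # trust the gyro short-term, the accelerometer long-term

def update_pitch(pitch_rad, gyro_pitch_rate, accel_x, accel_z, dt):
    """Fuse gyro integration with an accelerometer tilt estimate."""
    gyro_estimate = pitch_rad + gyro_pitch_rate * dt
    accel_estimate = math.atan2(-accel_x, accel_z)
    return ALPHA * gyro_estimate + (1.0 - ALPHA) * accel_estimate

# One update step with fake readings (dt = 10 ms):
pitch = 0.0
pitch = update_pitch(pitch, gyro_pitch_rate=0.02, accel_x=-0.05, accel_z=9.8, dt=0.01)
print(pitch)
</syntaxhighlight>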
== Experimenting with Your Humanoid Robot ==
=== Testing and Iteration ===
==== Virtual Testing Before Physical Implementation in Humanoid Robotics ====
NVIDIA's Isaac Sim and Isaac Gym, alongside other simulators, form a crucial foundation for designing and testing humanoid robots virtually. Insights and suggestions from experts working with these tools are captured below.
===== Isaac-Based Simulators and Frameworks =====
====== Isaac Sim ======
* IDE Experience: Provides a comprehensive, if complex, simulation environment.
* PhysX Engine: Utilizes the PhysX engine to handle both contact and joint constraints, though Isaac Sim currently does not fully expose closed-loop constraint capabilities.
* Joint Constraints: Supports maximal coordinate systems, which include joint constraints that are common in articulated robots.
* Virtual Sensors: Allows the simulation of perception with virtual cameras and LiDARs, providing policy training inputs rendered with NVIDIA RTX.
====== Isaac Gym ======
* Reinforcement Learning Training: Enables parallel environments for fast policy training (see the vectorized-environment sketch after this list).
* PHC Approach: Integrates AMP for real-time pose control, making it easier to teach new skills.
* Gait Optimization Issues: While 17-DOF walking tasks work well, gait reward optimization needs refinement for more complex tasks.
* Closed-Loop Articulation: Belt-driven mechanisms provide a viable alternative for certain closed-loop designs.
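Isaac Gym has its own Python API, but the general pattern of stepping many environments in parallel looks like this Gymnasium-style sketch. Humanoid-v4 (which needs the MuJoCo extras installed) stands in here for whatever task you actually train on.
<syntaxhighlight lang=python>
# Generic parallel-environment rollout in the Gymnasium API; Isaac Gym's own
# API differs, but the stepping pattern is the same. Requires:
#   pip install "gymnasium[mujoco]"
import gymnasium as gym
import numpy as np

NUM_ENVS = 4
envs = gym.vector.SyncVectorEnv([lambda: gym.make("Humanoid-v4") for _ in range(NUM_ENVS)])

obs, info = envs.reset(seed=0)
for _ in range(100):
    # Random actions stand in for a policy network's output.
    actions = np.stack([envs.single_action_space.sample() for _ in range(NUM_ENVS)])
    obs, rewards, terminated, truncated, info = envs.step(actions)
envs.close()
print("final batch reward:", rewards)
</syntaxhighlight>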
====== Orbit Framework ======
* Unified Training Framework: Integrates Isaac Sim and Isaac Gym for modular and consistent policy validation.
* OmniIsaacGymEnvs: Offers predefined tasks like walking and standing.
====== Omniverse Isaac Gym ======
* Shift in Development: NVIDIA is consolidating Isaac Gym into Isaac Sim through Omniverse, aiming to combine the strengths of both.
* Challenges: Demands powerful NVIDIA GPUs, potentially limiting some development workflows.
===== External Tools and Comparative Platforms =====
====== Legged Gym ======
A repository showcasing the state-of-the-art in legged robot training.
====== MuJoCo (MJX) ======
Offers a lightweight, open-source alternative that supports closed-loop kinematic constraints and is generally easier to work with. The MuJoCo_MPC repository, created by Google DeepMind, is a toolset that combines Model Predictive Control (MPC) with the MuJoCo physics engine for real-time behavior synthesis. With the MJX extension, which uses GPU acceleration, it can simulate many environments in parallel. One approach is to replicate the techniques detailed in the AMP (Adversarial Motion Priors) paper to achieve agile humanoid behavior, for example implementing a humanoid get-up sequence matching what was described in the AMP research.
There’s been collaboration between different projects, like Stompy, to get humanoid simulations up and running. You could try converting Gymnasium to handle the URDF (Universal Robot Description Format) file format. Although converting to MJCF (MuJoCo's XML-based format) may present some challenges, we can still get it to work and refine the motor and actuator setup.
Although MuJoCo can be slower in single-environment simulations, the MJX extension and its parallel processing potential make it a solid competitor. Compared to enviroments, like NVIDIA's Isaac Gym, MuJoCo might stand out for its extensibility and rapid development. One goal could be to try recreate the walking, running, and getting-up behaviors described in the AMP paper and use them as a foundation for training robust humanoid movements in simulation.
====== VSim ======
Claims to be 10x faster than other simulators.
====== ManiSkill/Sapien ======
Provides tactile simulation and visual-based policy training that is up to 100x faster than Isaac Sim.
===== Best Practices for Virtual Testing =====
* Incremental Complexity: Start simple and build up to more complex environments and tasks.
* Cross-Simulator Validation: Validate robot models across simulators (e.g., Isaac and MuJoCo) to ensure robustness.
* Incorporate Real-World Fidelity: Include sensor noise and imperfections for better policy generalization.
* Optimize Resources:
** Use Azure's A100 GPUs for Isaac training.
** Capture real-world data to refine virtual training.
By understanding the nuances and strengths of each simulator, developers can refine their humanoid robots effectively. Using Isaac Sim, Isaac Gym, and complementary tools, a robust simulation approach ensures smooth virtual-to-physical transferability while reducing development time and costs.
== Real-World Testing ==
Gradually transition to physical testing, beginning with simple tasks and moving to more complex interactions.
=== Data Collection and Analysis ===
==== Camera Systems ====
Consider integrating advanced camera systems like those from e-con Systems or Arducam for visual feedback and navigation. When choosing a camera, weigh factors like latency, resolution, and how easily it integrates with your main control system.
== Advanced Customization and Community Engagement ==
=== Open Source Projects ===
Contribute to or start your own open-source project. For instance, platforms like GitHub host numerous projects where you can collaborate with others, such as K-Scale (https://github.com/kscalelabs).
=== Modular Design ===
Engage in modular robot design to easily swap components or aesthetics. This approach allows for extensive customization and upgrades over time.
== Safety and Continuous Learning ==
=== Safety Protocols ===
Always implement robust safety measures when testing and demonstrating your robot.
5618194ecfcd1e82f4ee4e8138a05febba093427
881
880
2024-05-06T17:43:44Z
Vrtnis
21
/* Open Source Projects */
wikitext
text/x-wiki
This is a build guide for getting started experimenting with your own humanoid robot.
This is incomplete and a work in progress; you can help by expanding it!
== Building Your Humanoid Robot ==
In humanoid robotics, choosing the right components, such as actuators and gearboxes, is crucial. Builders often use planetary and cycloidal gear actuators for their precision and strength, along with Series Elastic and Quasi-Direct Drive actuators for smoother, more natural movements. Advanced designs like the MIT Cheetah actuator push the boundaries with fast, agile movements. Projects like the SPIN initiative are also key, as they make high-quality actuator technology more accessible, helping the field evolve and improve.
== Actuators and Gearboxes ==
=== Actuator Types and Design Inspirations ===
==== Planetary and Cycloidal Gear Actuators ====
These actuators remain popular in the robotics community due to their high torque output and compact form factors. Planetary gears are favored for their efficiency and ability to handle high power densities, crucial for humanoid robotics. Cycloidal gears offer superior load-bearing capabilities and minimal backlash, ideal for precise motion control.
MyActuator (just one option) offers a variety of planetary actuators. These actuators, while still relatively pricey, offer robust performance and are integral to the efficient functioning of many builds. Some models are:
* RMD X4: A lightweight and compact actuator that provides precise control and high efficiency.
* RMD X6: Offers a good balance of torque and speed, suitable for medium-sized applications.
* RMD X8: Features a more powerful motor and higher torque capacity, making it ideal for more demanding tasks.
* RMD X10: The most powerful actuator in the series, designed for high-torque applications with excellent control features.
==== Series Elastic and Quasi-Direct Drive Actuators ====
Series Elastic Actuators (SEAs) are used in applications requiring safe and compliant human-robot interaction. They incorporate elastic elements, allowing for energy absorption and safer interactions. Quasi-Direct Drive Actuators provide a balance between the control fidelity of direct drives and the mechanical simplicity of geared systems, promoting natural and responsive movements.
Some things to consider:
The springs in SEAs are where the magic happens. Choosing the right stiffness is a balancing act between getting precise torque control and avoiding sluggish responses. Since the spring is constantly flexing, you need sensors tuned to give accurate torque measurements. Do it regularly, and you'll keep those movements smooth and predictable. You want finely-tuned control loops to make SEAs shine. A high-frequency loop can make your robot more agile in handling external forces. PID controllers are a solid starting point, or you can try out some advanced strategies.
Friction can really impact your torque control, especially in gearboxes and linkages. Using low-friction components and proper lubrication will help keep everything moving smoothly. Make sure your spring is positioned directly between the actuator and the joint. If not, your robot won’t get the full benefit of force sensing, and that precision will be lost.If your robot is doing a lot of high-impact activities, the springs can wear out. Keep an eye on them to avoid breakdowns when you least expect them. SEAs thrive on real-time feedback. Ensure your software can handle data quickly, maybe using a real-time operating system or optimized signal processing.
==== MIT Cheetah Actuator ====
The MIT Cheetah actuator design is a notable example that several community members are considering emulating. Its design optimizes for rapid, dynamic movements and could potentially set a standard for agile robotic locomotion. It packs a lot of power into a lightweight, compact system, offering excellent torque and control without being bulky, which makes it perfect for mobile robots that need to be quick on their feet. It's also energy-efficient and provides a high torque-to-weight ratio, so robots can move fast and precisely, which is essential for those tricky, agile movements.
One of the coolest things about this actuator is how it manages to minimize backlash, giving you smooth, accurate control over the robot's motion. Its integrated design also means the motor and controller work together seamlessly, which keeps the system streamlined. Plus, the advanced control algorithms make it easy for the actuator to handle dynamic motions, whether it's fast acceleration or sharp turns. If you're building a robot that needs to move like a sprinter while staying super nimble, the MIT Cheetah actuator is an awesome choice.
Here is the [https://fab.cba.mit.edu/classes/865.18/motion/papers/mit-cheetah-actuator.pdf MIT research paper] if you are interested in a deeper dive.
=== Open-Source Development and Collaboration ===
==== SPIN: A Revolutionary Servo Project ====
The [https://github.com/atopile/spin-servo-drive SPIN Project] by Atopile is an open-source hardware effort aimed at making it easier and more cost-effective to use BLDC servo motors. The project is particularly notable for its potential to democratize high-quality actuator technology, making it accessible to a broader range of developers and hobbyists.
=== Community Insights and Future Directions ===
==== Comprehensive Actuator Comparisons ====
The humanoid robotics community actively discusses the need for a universal platform to compare and contrast the cost and performance of commercially available actuators. This could involve developing a comprehensive database or chart detailing each actuator's cost per Newton-meter, control schemes, and RPM, providing a valuable resource for both newcomers and experienced developers.
Here is a [https://jakeread.pages.cba.mit.edu/actuators/ scatter plot] of actuators hosted at MIT.
==== Custom Actuator Developments ====
[https://irisdynamics.com/products/orca-series Iris Dynamics' Orca-series electric linear actuators] are claimed to match the capabilities of human muscles, making them particularly interesting for humanoid applications.
== Assembly Tips ==
===== Community Forums =====
Leverage discussions from platforms like RobotForum to avoid common pitfalls. Whether it's selecting the right planetary gearbox or figuring out the optimal motor for each joint, community insights can be invaluable.
=== Programming and Control ===
==== ROS (Robot Operating System) ====
Start with ROS for an extensive suite of tools for programming and control, suitable for managing complex robotic functions. ROS serves as a valuable abstraction for understanding the diverse components of a robotics system. However, its size and complexity can make it hard to modify, the learning curve can be steep, and because it leans heavily on third-party packages, tracking down issues can require extra effort and expertise.
==== Custom Software Solutions ====
Explore custom algorithms for adaptive control or reactive behaviors. Integrate advanced sensor feedback loops for real-time adjustments.
== Experimenting with Your Humanoid Robot ==
=== Testing and Iteration ===
==== Virtual Testing Before Physical Implementation in Humanoid Robotics ====
NVIDIA's Isaac Sim and Isaac Gym, alongside other simulators, form a crucial foundation for designing and testing humanoid robots virtually. Insights and suggestions from experts working with these tools are captured below.
===== Isaac-Based Simulators and Frameworks =====
====== Isaac Sim ======
* IDE Experience: Provides a comprehensive, if complex, simulation environment.
* PhysX Engine: Utilizes the PhysX engine to handle both contact and joint constraints, though Isaac Sim currently does not fully expose closed-loop constraint capabilities.
* Joint Constraints: Supports maximal coordinate systems, which include joint constraints that are common in articulated robots.
* Virtual Sensors: Allows the simulation of perception with virtual cameras and LiDARs, providing policy training inputs rendered with NVIDIA RTX.
====== Isaac Gym ======
* Reinforcement Learning Training: Enables parallel environments for fast policy training.
* PHC Approach: Integrates AMP for real-time pose control, making it easier to teach new skills.
* Gait Optimization Issues: While 17-DOF walking tasks work well, gait reward optimization needs refinement for more complex tasks.
* Closed-Loop Articulation: Belt-driven mechanisms provide a viable alternative for certain closed-loop designs.
====== Orbit Framework ======
* Unified Training Framework: Integrates Isaac Sim and Isaac Gym for modular and consistent policy validation.
* OmniIsaacGymEnvs: Offers predefined tasks like walking and standing.
====== Omniverse Isaac Gym ======
* Shift in Development: NVIDIA is consolidating Isaac Gym into Isaac Sim through Omniverse, providing the best of both worlds.
* Challenges: Demands powerful NVIDIA GPUs, potentially limiting some development workflows.
===== External Tools and Comparative Platforms =====
====== Legged Gym ======
A repository showcasing the state-of-the-art in legged robot training.
====== MuJoCo (MJX) ======
Offers a lightweight, open-source alternative that supports maximal coordinate constraints and is generally easier to work with. The MuJoCo_MPC repository, created by Google DeepMind, combines Model Predictive Control (MPC) with the MuJoCo physics engine for real-time behavior synthesis. With the MJX extension, which uses GPU acceleration, it can simulate many environments in parallel. One approach is to replicate the techniques detailed in the AMP (Adversarial Motion Priors) paper to achieve agile humanoid behavior, for example implementing a humanoid get-up sequence matching what was described in the AMP research.
There has been collaboration between different projects, like Stompy, to get humanoid simulations up and running. You could try converting Gymnasium to handle the URDF (Universal Robot Description Format) file format. Although converting to MJCF (MuJoCo's XML-based format) may present some challenges, it can still be made to work with some refinement of the motor and actuator setup.
Although MuJoCo can be slower in single-environment simulations, the MJX extension and its parallel-processing potential make it a solid competitor. Compared to environments like NVIDIA's Isaac Gym, MuJoCo might stand out for its extensibility and rapid development. One goal could be to recreate the walking, running, and getting-up behaviors described in the AMP paper and use them as a foundation for training robust humanoid movements in simulation.
====== VSim ======
Claims to be 10x faster than other simulators.
====== ManiSkill/Sapien ======
Provides tactile simulation and visual-based policy training that is up to 100x faster than Isaac Sim.
===== Best Practices for Virtual Testing =====
* Incremental Complexity: Start simple and build up to more complex environments and tasks.
* Cross-Simulator Validation: Validate robot models across simulators (e.g., Isaac and MuJoCo) to ensure robustness.
* Incorporate Real-World Fidelity: Include sensor noise and imperfections for better policy generalization.
* Optimize Resources:
** Use Azure's A100 GPUs for Isaac training.
** Capture real-world data to refine virtual training.
By understanding the nuances and strengths of each simulator, developers can refine their humanoid robots effectively. Using Isaac Sim, Isaac Gym, and complementary tools, a robust simulation approach ensures smooth virtual-to-physical transferability while reducing development time and costs.
== Real-World Testing ==
Gradually transition to physical testing, beginning with simple tasks and moving to more complex interactions.
=== Data Collection and Analysis ===
==== Camera Systems ====
Consider integrating advanced camera systems like those from e-con Systems or Arducam for visual feedback and navigation. When choosing a camera, weigh factors like latency, resolution, and how easily it integrates with your main control system.
== Advanced Customization and Community Engagement ==
=== Open Source Projects ===
Contribute to an open-source project. For instance, platforms like GitHub host numerous projects where you can collaborate with others, such as K-Scale (https://github.com/kscalelabs). Check out [https://humanoids.wiki/w/Stompy Stompy]!
=== Modular Design ===
Engage in modular robot design to easily swap components or aesthetics. This approach allows for extensive customization and upgrades over time.
== Safety and Continuous Learning ==
=== Safety Protocols ===
Always implement robust safety measures when testing and demonstrating your robot.
6737df27ab99200f97333623c586542b4ccc883c
882
881
2024-05-06T17:47:18Z
Vrtnis
21
/* Building Your Humanoid Robot */
wikitext
text/x-wiki
This is a build guide for getting started experimenting with your own humanoid robot.
This is incomplete and a work in progress; you can help by expanding it!
== Building Your Humanoid Robot ==
In humanoid robotics, choosing the right components, such as actuators and gearboxes, is crucial. Builders often use planetary and cycloidal gear actuators for their precision and strength, along with Series Elastic and Quasi-Direct Drive actuators for smoother, more natural movements. Advanced designs like the [https://humanoids.wiki/w/MIT_Cheetah MIT Cheetah] actuator push the boundaries with fast, agile movements. Projects like the SPIN initiative are also key, as they make high-quality actuator technology more accessible, helping the field evolve and improve.
== Actuators and Gearboxes ==
=== Actuator Types and Design Inspirations ===
==== Planetary and Cycloidal Gear Actuators ====
These actuators remain popular in the robotics community due to their high torque output and compact form factors. Planetary gears are favored for their efficiency and ability to handle high power densities, crucial for humanoid robotics. Cycloidal gears offer superior load-bearing capabilities and minimal backlash, ideal for precise motion control.
MyActuator (just one option) offers a variety of planetary actuators. These actuators, while still relatively pricey, offer robust performance and are integral to the efficient functioning of many builds. Some models are:
* RMD X4: A lightweight and compact actuator that provides precise control and high efficiency.
* RMD X6: Offers a good balance of torque and speed, suitable for medium-sized applications.
* RMD X8: Features a more powerful motor and higher torque capacity, making it ideal for more demanding tasks.
* RMD X10: The most powerful actuator in the series, designed for high-torque applications with excellent control features.
==== Series Elastic and Quasi-Direct Drive Actuators ====
Series Elastic Actuators (SEAs) are used in applications requiring safe and compliant human-robot interaction. They incorporate elastic elements, allowing for energy absorption and safer interactions. Quasi-Direct Drive Actuators provide a balance between the control fidelity of direct drives and the mechanical simplicity of geared systems, promoting natural and responsive movements.
Some things to consider:
* Spring selection: The springs in SEAs are where the magic happens. Choosing the right stiffness is a balancing act between getting precise torque control and avoiding sluggish responses.
* Sensing and calibration: Since the spring is constantly flexing, you need sensors tuned to give accurate torque measurements. Calibrate them regularly and you'll keep those movements smooth and predictable.
* Control loops: Finely tuned control loops make SEAs shine. A high-frequency loop can make your robot more agile in handling external forces. PID controllers are a solid starting point, or you can try out more advanced strategies (a minimal torque-loop sketch follows this list).
* Friction: Friction can really hurt your torque control, especially in gearboxes and linkages. Using low-friction components and proper lubrication will help keep everything moving smoothly.
* Spring placement: Make sure your spring sits directly between the actuator and the joint. If not, your robot won't get the full benefit of force sensing, and that precision will be lost.
* Wear: If your robot is doing a lot of high-impact activities, the springs can wear out. Keep an eye on them to avoid breakdowns when you least expect them.
* Real-time feedback: SEAs thrive on real-time feedback. Ensure your software can handle data quickly, perhaps using a real-time operating system or optimized signal processing.
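To make the control-loop point concrete, here is a minimal, hypothetical torque-loop sketch in Python. The spring stiffness, gains, loop rate, and the <code>read_encoders</code>/<code>write_motor_command</code> functions are placeholders for your own hardware interface; treat it as an illustration of the idea rather than a drop-in controller.
<syntaxhighlight lang="python">
# Hypothetical SEA torque loop: torque is estimated from spring deflection and
# regulated with a PID controller. All constants are illustrative, not vendor values.
import time

K_SPRING = 120.0           # assumed spring stiffness [Nm/rad]
KP, KI, KD = 4.0, 0.5, 0.05
DT = 0.001                 # 1 kHz loop

def estimated_torque(theta_motor, theta_joint):
    """Transmitted torque = spring stiffness times spring deflection."""
    return K_SPRING * (theta_motor - theta_joint)

def run_torque_loop(read_encoders, write_motor_command, torque_setpoint):
    """read_encoders() -> (theta_motor, theta_joint); write_motor_command(u) sends a motor command."""
    integral, prev_error = 0.0, 0.0
    while True:
        theta_motor, theta_joint = read_encoders()
        error = torque_setpoint - estimated_torque(theta_motor, theta_joint)
        integral += error * DT
        derivative = (error - prev_error) / DT
        write_motor_command(KP * error + KI * integral + KD * derivative)
        prev_error = error
        time.sleep(DT)  # a real controller would use a real-time timer instead
</syntaxhighlight>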
==== MIT Cheetah Actuator ====
The MIT Cheetah actuator design is a notable example that several community members are considering emulating. Its design optimizes for rapid, dynamic movements and could potentially set a standard for agile robotic locomotion. It packs a lot of power into a lightweight, compact system, offering excellent torque and control without being bulky, which makes it perfect for mobile robots that need to be quick on their feet. It's also energy-efficient and provides a high torque-to-weight ratio, so robots can move fast and precisely, which is essential for those tricky, agile movements.
One of the coolest things about this actuator is how it manages to minimize backlash, giving you smooth, accurate control over the robot's motion. Its integrated design also means the motor and controller work together seamlessly, which keeps the system streamlined. Plus, the advanced control algorithms make it easy for the actuator to handle dynamic motions, whether it's fast acceleration or sharp turns. If you're building a robot that needs to move like a sprinter while staying super nimble, the MIT Cheetah actuator is an awesome choice.
Here is the [https://fab.cba.mit.edu/classes/865.18/motion/papers/mit-cheetah-actuator.pdf MIT research paper] if you are interested in a deeper dive.
=== Open-Source Development and Collaboration ===
==== SPIN: A Revolutionary Servo Project ====
The [https://github.com/atopile/spin-servo-drive SPIN Project] by Atopile is an open-source hardware effort aimed at making it easier and more cost-effective to use BLDC servo motors. The project is particularly notable for its potential to democratize high-quality actuator technology, making it accessible to a broader range of developers and hobbyists.
=== Community Insights and Future Directions ===
==== Comprehensive Actuator Comparisons ====
The humanoid robotics community actively discusses the need for a universal platform to compare and contrast the cost and performance of commercially available actuators. This could involve developing a comprehensive database or chart detailing each actuator's cost per Newton-meter, control schemes, and RPM, providing a valuable resource for both newcomers and experienced developers.
Here is a [https://jakeread.pages.cba.mit.edu/actuators/ scatter plot] of actuators hosted at MIT.
==== Custom Actuator Developments ====
[https://irisdynamics.com/products/orca-series Iris Dynamics' Orca-series electric linear actuators] are claimed to match the capabilities of human muscles, making them particularly interesting for humanoid applications.
== Assembly Tips ==
===== Community Forums =====
Leverage discussions from platforms like RobotForum to avoid common pitfalls. Whether it's selecting the right planetary gearbox or figuring out the optimal motor for each joint, community insights can be invaluable.
=== Programming and Control ===
==== ROS (Robot Operating System) ====
Start with ROS for an extensive suite of tools for programming and control, suitable for managing complex robotic functions. ROS serves as a valuable abstraction for understanding the diverse components of a robotics system. However, its size and complexity can make it hard to modify, the learning curve can be steep, and because it leans heavily on third-party packages, tracking down issues can require extra effort and expertise.
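To give a sense of scale, here is a minimal ROS 1 (<code>rospy</code>) node that publishes a joint command at 50 Hz. The topic name and the bare <code>Float64</code> message are assumptions for the example; a real robot defines its own controller interfaces.
<syntaxhighlight lang="python">
#!/usr/bin/env python
# Minimal rospy sketch: publish a constant joint position command at 50 Hz.
import rospy
from std_msgs.msg import Float64

def main():
    rospy.init_node("elbow_commander")
    pub = rospy.Publisher("/elbow_joint/command", Float64, queue_size=10)  # hypothetical topic
    rate = rospy.Rate(50)
    while not rospy.is_shutdown():
        pub.publish(Float64(data=0.5))  # hold the joint at 0.5 rad
        rate.sleep()

if __name__ == "__main__":
    main()
</syntaxhighlight>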
==== Custom Software Solutions ====
Explore custom algorithms for adaptive control or reactive behaviors. Integrate advanced sensor feedback loops for real-time adjustments.
== Experimenting with Your Humanoid Robot ==
=== Testing and Iteration ===
==== Virtual Testing Before Physical Implementation in Humanoid Robotics ====
NVIDIA's Isaac Sim and Isaac Gym, alongside other simulators, form a crucial foundation for designing and testing humanoid robots virtually. Insights and suggestions from experts working with these tools are captured below.
===== Isaac-Based Simulators and Frameworks =====
====== Isaac Sim ======
* IDE Experience: Provides a comprehensive, if complex, simulation environment.
* PhysX Engine: Utilizes the PhysX engine to handle both contact and joint constraints, though Isaac Sim currently does not fully expose closed-loop constraint capabilities.
* Joint Constraints: Supports maximal coordinate systems, which include joint constraints that are common in articulated robots.
* Virtual Sensors: Allows the simulation of perception with virtual cameras and LiDARs, providing policy training inputs rendered with NVIDIA RTX.
====== Isaac Gym ======
* Reinforcement Learning Training: Enables parallel environments for fast policy training.
* PHC Approach: Integrates AMP for real-time pose control, making it easier to teach new skills.
* Gait Optimization Issues: While 17-DOF walking tasks work well, gait reward optimization needs refinement for more complex tasks.
* Closed-Loop Articulation: Belt-driven mechanisms provide a viable alternative for certain closed-loop designs.
====== Orbit Framework ======
* Unified Training Framework: Integrates Isaac Sim and Isaac Gym for modular and consistent policy validation.
* OmniIsaacGymEnvs: Offers predefined tasks like walking and standing.
====== Omniverse Isaac Gym ======
* Shift in Development: NVIDIA is consolidating Isaac Gym into Isaac Sim through Omniverse, providing the best of both worlds.
* Challenges: Demands powerful NVIDIA GPUs, potentially limiting some development workflows.
===== External Tools and Comparative Platforms =====
====== Legged Gym ======
A repository showcasing the state-of-the-art in legged robot training.
====== MuJoCo (MJX) ======
Offers a lightweight, open-source alternative that supports maximal coordinate constraints and is generally easier to work with. The MuJoCo_MPC repository, created by Google DeepMind, combines Model Predictive Control (MPC) with the MuJoCo physics engine for real-time behavior synthesis. With the MJX extension, which uses GPU acceleration, it can simulate many environments in parallel. One approach is to replicate the techniques detailed in the AMP (Adversarial Motion Priors) paper to achieve agile humanoid behavior, for example implementing a humanoid get-up sequence matching what was described in the AMP research.
There has been collaboration between different projects, like Stompy, to get humanoid simulations up and running. You could try converting Gymnasium to handle the URDF (Universal Robot Description Format) file format. Although converting to MJCF (MuJoCo's XML-based format) may present some challenges, it can still be made to work with some refinement of the motor and actuator setup.
Although MuJoCo can be slower in single-environment simulations, the MJX extension and its parallel-processing potential make it a solid competitor. Compared to environments like NVIDIA's Isaac Gym, MuJoCo might stand out for its extensibility and rapid development. One goal could be to recreate the walking, running, and getting-up behaviors described in the AMP paper and use them as a foundation for training robust humanoid movements in simulation.
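For a feel of how little code a basic MuJoCo experiment needs, here is a minimal sketch that loads an MJCF model and steps the physics. The <code>humanoid.xml</code> path is a placeholder for whichever humanoid model (bundled or URDF-converted) you are working with.
<syntaxhighlight lang="python">
# Minimal MuJoCo sketch: load an MJCF model and step the CPU simulation.
import mujoco

model = mujoco.MjModel.from_xml_path("humanoid.xml")  # placeholder path
data = mujoco.MjData(model)

for _ in range(1000):
    data.ctrl[:] = 0.0           # zero actuator commands; replace with a policy
    mujoco.mj_step(model, data)  # advance physics by one timestep

# For the standard free-joint humanoid, the third qpos entry is the torso height.
print("final torso height:", data.qpos[2])
</syntaxhighlight>
Once the single-environment setup behaves sensibly, the same model can be lifted into MJX for GPU-parallel rollouts.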
====== VSim ======
Claims to be 10x faster than other simulators.
====== ManiSkill/Sapien ======
Provides tactile simulation and visual-based policy training that is up to 100x faster than Isaac Sim.
===== Best Practices for Virtual Testing =====
* Incremental Complexity: Start simple and build up to more complex environments and tasks.
* Cross-Simulator Validation: Validate robot models across simulators (e.g., Isaac and MuJoCo) to ensure robustness.
* Incorporate Real-World Fidelity: Include sensor noise and imperfections for better policy generalization (see the noise-injection sketch after this list).
* Optimize Resources:
** Use Azure's A100 GPUs for Isaac training.
** Capture real-world data to refine virtual training.
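As a small illustration of the real-world-fidelity point, the sketch below perturbs simulated observations with Gaussian noise before they reach the policy. The noise scale is an arbitrary example value; tune it against your actual sensors.
<syntaxhighlight lang="python">
# Inject zero-mean Gaussian noise into simulated observations so trained policies
# are less surprised by real sensors. The 0.01 standard deviation is illustrative.
import numpy as np

rng = np.random.default_rng(0)

def noisy_observation(obs, noise_std=0.01):
    """Return the observation with additive Gaussian noise."""
    return obs + rng.normal(0.0, noise_std, size=np.shape(obs))

clean = np.array([0.10, -0.25, 0.40])   # e.g. joint angles in radians
print(noisy_observation(clean))
</syntaxhighlight>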
By understanding the nuances and strengths of each simulator, developers can refine their humanoid robots effectively. Using Isaac Sim, Isaac Gym, and complementary tools, a robust simulation approach ensures smooth virtual-to-physical transferability while reducing development time and costs.
== Real-World Testing ==
Gradually transition to physical testing, beginning with simple tasks and moving to more complex interactions.
=== Data Collection and Analysis ===
==== Camera Systems ====
Consider integrating advanced camera systems like those from e-con Systems or Arducam for visual feedback and navigation. When choosing a camera, weigh factors like latency, resolution, and how easily it integrates with your main control system.
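Before committing to a camera, it can help to measure what it actually delivers. The sketch below uses OpenCV to request a resolution and time frame reads; the device index and target resolution are assumptions for the example.
<syntaxhighlight lang="python">
# Rough camera check: request 1280x720 from device 0 and time 100 frame reads.
import time
import cv2

cap = cv2.VideoCapture(0)  # device index is an assumption
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)

samples, last_frame = [], None
for _ in range(100):
    start = time.perf_counter()
    ok, frame = cap.read()
    if not ok:
        break
    samples.append(time.perf_counter() - start)
    last_frame = frame
cap.release()

if samples and last_frame is not None:
    h, w = last_frame.shape[:2]
    print(f"delivered resolution: {w}x{h}, mean frame time: {1000 * sum(samples) / len(samples):.1f} ms")
</syntaxhighlight>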
== Advanced Customization and Community Engagement ==
=== Open Source Projects ===
Contribute to an open-source project. For instance, platforms like GitHub host numerous projects where you can collaborate with others, such as K-Scale (https://github.com/kscalelabs). Check out [https://humanoids.wiki/w/Stompy Stompy]!
=== Modular Design ===
Engage in modular robot design to easily swap components or aesthetics. This approach allows for extensive customization and upgrades over time.
== Safety and Continuous Learning ==
=== Safety Protocols ===
Always implement robust safety measures when testing and demonstrating your robot.
01ba2abedbfdd94f1174f9a5ddd33fc34fefc07b
Main Page
0
1
833
819
2024-05-03T23:41:43Z
Modeless
7
/* List of Humanoid Robots */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
==== Resources ====
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
|
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[Reinforcement Learning]]
| Resources related to understanding reinforcement learning
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== List of Actuators ===
{| class="wikitable"
|-
! Actuator
! Notes
|-
| [[MyActuator X-Series]]
| MIT Cheetah-like quasi-direct drive actuator, with planetary gears
|-
| [[OBot]]
| Open-source actuator
|-
| [[SPIN Servo]]
| Open-source actuator
|-
| [[VESCular6]]
| A project based on [[VESC]]
|-
| [[ODrive]]
|
|-
| [[Solo Motor Controller]]
| A motor controller alternative to the [[ODrive]].
|-
| [[J60]]
| Actuators built for the [[DEEP Robotics]] quadrupeds.
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[Xpeng]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[Proxy]]
|
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[SUPCON]]
| [[Navigator α]]
|}
fc3f5167ec8e3dfd7915f0824bcf6e4207c067e5
871
833
2024-05-06T03:51:49Z
Dymaxion
22
/* List of Humanoid Robots */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
==== Resources ====
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
|
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[Reinforcement Learning]]
| Resources related to understanding reinforcement learning
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== List of Actuators ===
{| class="wikitable"
|-
! Actuator
! Notes
|-
| [[MyActuator X-Series]]
| MIT Cheetah-like quasi-direct drive actuator, with planetary gears
|-
| [[OBot]]
| Open-source actuator
|-
| [[SPIN Servo]]
| Open-source actuator
|-
| [[VESCular6]]
| A project based on [[VESC]]
|-
| [[ODrive]]
|
|-
| [[Solo Motor Controller]]
| A motor controller alternative to the [[ODrive]].
|-
| [[J60]]
| Actuators built for the [[DEEP Robotics]] quadrupeds.
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[Xpeng]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[Proxy]]
|
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[SUPCON]]
| [[Navigator α]]
|-
| [[NASA Johnson Space Center]]
| [[Robonaut2]]
|}
90d35249297815078c5ad24b12effa6f0079a7e3
873
871
2024-05-06T04:10:32Z
Dymaxion
22
/* List of Humanoid Robots */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
==== Resources ====
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
|
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[Reinforcement Learning]]
| Resources related to understanding reinforcement learning
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== List of Actuators ===
{| class="wikitable"
|-
! Actuator
! Notes
|-
| [[MyActuator X-Series]]
| MIT Cheetah-like quasi-direct drive actuator, with planetary gears
|-
| [[OBot]]
| Open-source actuator
|-
| [[SPIN Servo]]
| Open-source actuator
|-
| [[VESCular6]]
| A project based on [[VESC]]
|-
| [[ODrive]]
|
|-
| [[Solo Motor Controller]]
| A motor controller alternative to the [[ODrive]].
|-
| [[J60]]
| Actuators built for the [[DEEP Robotics]] quadrupeds.
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[Xpeng]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[Proxy]]
|
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[SUPCON]]
| [[Navigator α]]
|-
| [[NASA]]
| [[Robonaut2]]
|}
8fb84ca6e653bf36575a46741ef9a945f4acda50
875
873
2024-05-06T05:34:57Z
Ben
2
/* List of Humanoid Robots */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
==== Resources ====
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
|
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[Reinforcement Learning]]
| Resources related to understanding reinforcement learning
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== List of Actuators ===
{| class="wikitable"
|-
! Actuator
! Notes
|-
| [[MyActuator X-Series]]
| MIT Cheetah-like quasi-direct drive actuator, with planetary gears
|-
| [[OBot]]
| Open-source actuator
|-
| [[SPIN Servo]]
| Open-source actuator
|-
| [[VESCular6]]
| A project based on [[VESC]]
|-
| [[ODrive]]
|
|-
| [[Solo Motor Controller]]
| A motor controller alternative to the [[ODrive]].
|-
| [[J60]]
| Actuators built for the [[DEEP Robotics]] quadrupeds.
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[Xpeng]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]], [[Robonaut2]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[Proxy]]
|
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[SUPCON]]
| [[Navigator α]]
|}
b08def286007fc5165e8a0e5c8ee9c4155b5a743
SUPCON
0
199
834
2024-05-03T23:44:29Z
Modeless
7
Created page with "SUPCON is a Chinese company that established a lab called the "Zhejiang Humanoid Robot Innovation Center". They have a humanoid called [[Navigator α]].<ref>https://www.therob..."
wikitext
text/x-wiki
SUPCON is a Chinese company that established a lab called the "Zhejiang Humanoid Robot Innovation Center". They have a humanoid called [[Navigator α]].<ref>https://www.therobotreport.com/supcon-opens-innovation-center-launches-navigator-%CE%B1-humanoid-robot/</ref><ref>https://global.supcon.com/news/info/817.html</ref>
a8af0823909865bac31302c82052128b5e5c9946
K-Scale Labs
0
5
835
720
2024-05-04T19:00:04Z
Ben
2
wikitext
text/x-wiki
[[File:Logo.png|right|200px|thumb|The K-Scale Labs logo.]]
[https://kscale.dev/ K-Scale Labs] is building an open-source humanoid robot called [[Stompy]].
{{infobox company
| name = K-Scale Labs
| country = United States
| website_link = https://kscale.dev/
| robots = [[Stompy]]
}}
[[Category:Companies]]
[[Category:K-Scale]]
6667079512b772b6d5a7953bcc80de0e96066370
840
835
2024-05-04T19:13:53Z
Ben
2
wikitext
text/x-wiki
[[File:Logo.png|right|200px|thumb|The K-Scale Labs logo.]]
[https://kscale.dev/ K-Scale Labs] is building an open-source humanoid robot called [[Stompy]].
{{infobox company
| name = K-Scale Labs
| country = United States
| website_link = https://kscale.dev/
| robots = [[Stompy]]
}}
== Logos ==
Here are some other versions of the K-Scale Labs logo.
<gallery>
Logo.png
K-Scale Labs Raw Logo.png
K-Scale Padded Logo.png
K-Scale Raw Padded Logo.png
KScale Raw Transparent Padded Logo.png
</gallery>
=== Adding Padding ===
Here's a helpful command to add padding to an image using ImageMagick:
<syntaxhighlight lang="bash">
convert \
-channel RGB \
-negate \
-background black \
-alpha remove \
-alpha off \
-gravity center \
-scale 384x384 \
-extent 512x512 \
logo.png \
logo_padded.png
</syntaxhighlight>
[[Category:Companies]]
[[Category:K-Scale]]
413302d02e1cabb55d1d4df8fd062bfde6bbfd4c
842
840
2024-05-04T19:24:02Z
Ben
2
wikitext
text/x-wiki
[[File:Logo.png|right|200px|thumb|The K-Scale Labs logo.]]
[https://kscale.dev/ K-Scale Labs] is building an open-source humanoid robot called [[Stompy]].
{{infobox company
| name = K-Scale Labs
| country = United States
| website_link = https://kscale.dev/
| robots = [[Stompy]]
}}
== Logos ==
Here are some other versions of the K-Scale Labs logo.
<gallery>
Logo.png
K-Scale Labs Raw Logo.png
K-Scale Padded Logo.png
K-Scale Raw Padded Logo.png
KScale Raw Transparent Padded Logo.png
K-Scale Raw Padded White Logo.png
</gallery>
=== Adding Padding ===
Here's a helpful command to add padding to an image using ImageMagick:
<syntaxhighlight lang="bash">
convert \
-channel RGB \
-negate \
-background black \
-alpha remove \
-alpha off \
-gravity center \
-scale 384x384 \
-extent 512x512 \
logo.png \
logo_padded.png
</syntaxhighlight>
[[Category:Companies]]
[[Category:K-Scale]]
419fc39dd3c3592d3c48e7db6ce0d6e43e5c6c9b
File:K-Scale Labs Raw Logo.png
6
200
836
2024-05-04T19:02:11Z
Ben
2
wikitext
text/x-wiki
K-Scale Labs Logo without the K
3e249413ed3ef595115c3b5e88dc00c74032bd96
File:K-Scale Padded Logo.png
6
201
837
2024-05-04T19:06:48Z
Ben
2
wikitext
text/x-wiki
K-Scale Padded Logo
0435f8c4095d5512d672fa5c8d9194830f933ead
File:K-Scale Raw Padded Logo.png
6
202
838
2024-05-04T19:07:17Z
Ben
2
wikitext
text/x-wiki
K-Scale Raw Padded Logo
cef7879158402f833b87763f610a160eca284466
File:KScale Raw Transparent Padded Logo.png
6
203
839
2024-05-04T19:13:32Z
Ben
2
wikitext
text/x-wiki
KScale Raw Transparent Padded Logo
d8e27e2c1ef230cec39664f1b563e2948d801aa6
File:K-Scale Raw Padded White Logo.png
6
204
841
2024-05-04T19:23:52Z
Ben
2
wikitext
text/x-wiki
K-Scale Raw Padded White Logo
f56377bd5347514d9f9d6f502da1b23b38efffd6
K-Scale Cluster
0
16
853
809
2024-05-05T00:48:20Z
Jos
17
/* Reserving a GPU */
wikitext
text/x-wiki
The K-Scale Labs clusters are shared clusters for robotics research. This page contains notes on how to access them.
== Onboarding ==
To get onboarded, you should send us the public key that you want to use and maybe your preferred username.
After being onboarded, you should receive the following information:
* Your user ID (for this example, we'll use <code>stompy</code>)
* The jumphost ID (for this example, we'll use <code>127.0.0.1</code>)
* The cluster ID (for this example, we'll use <code>127.0.0.2</code>)
To connect, you should be able to use the following command:
<syntaxhighlight lang="bash">
ssh -o ProxyCommand="ssh -i ~/.ssh/id_rsa -W %h:%p stompy@127.0.0.1" stompy@127.0.0.2 -i ~/.ssh/id_rsa
</syntaxhighlight>
Note that <code>~/.ssh/id_rsa</code> should point to your private key file.
Alternatively, you can add the following to your SSH config file, which should allow you to connect directly.
Use your favorite editor to open the SSH config file (normally located at <code>~/.ssh/config</code> on Ubuntu) and paste the text:
<syntaxhighlight lang="text">
Host jumphost
User stompy
Hostname 127.0.0.1
IdentityFile ~/.ssh/id_rsa
Host cluster
User stompy
Hostname 127.0.0.2
ProxyJump jumphost
IdentityFile ~/.ssh/id_rsa
</syntaxhighlight>
After setting this up, you can use the command <code>ssh cluster</code> to directly connect.
You can also access via VS Code. Tutorial of using <code>ssh</code> in VS Code is [https://code.visualstudio.com/docs/remote/ssh-tutorial here].
Please inform us if you have any issues!
=== Notes ===
* You may need to restart <code>ssh</code> to get it working.
* You may be sharing your part of the cluster with other users. If so, it is a good idea to avoid using all the GPUs. If you're training models in PyTorch, you can do this using the <code>CUDA_VISIBLE_DEVICES</code> environment variable.
* You should avoid storing data files and model checkpoints to your root directory. Instead, use the <code>/ephemeral</code> directory. Your home directory should come with a symlink to a subdirectory which you have write access to.
=== Cluster 1 ===
=== Cluster 2 ===
The cluster has 8 available nodes (each with 8 GPUs):
<syntaxhighlight lang="text">
compute-permanent-node-68
compute-permanent-node-285
compute-permanent-node-493
compute-permanent-node-625
compute-permanent-node-626
compute-permanent-node-749
compute-permanent-node-801
compute-permanent-node-580
</syntaxhighlight>
When you SSH in, you land on the bastion node <code>pure-caribou-bastion</code>, from which you can log in to any other node to test your code.
== Reserving a GPU ==
Here is a script you can use for getting an interactive node through Slurm.
<syntaxhighlight lang="bash">
gpunode () {
local job_id=$(squeue -u $USER -h -t R -o %i -n gpunode)
if [[ -n $job_id ]]
then
echo "Attaching to job ID $job_id"
srun --jobid=$job_id --partition=$SLURM_GPUNODE_PARTITION --gpus=$SLURM_GPUNODE_NUM_GPUS --cpus-per-gpu=$SLURM_GPUNODE_CPUS_PER_GPU --pty $SLURM_XPUNODE_SHELL
return 0
fi
echo "Creating new job"
srun --partition=$SLURM_GPUNODE_PARTITION --gpus=$SLURM_GPUNODE_NUM_GPUS --cpus-per-gpu=$SLURM_GPUNODE_CPUS_PER_GPU --interactive --job-name=gpunode --pty $SLURM_XPUNODE_SHELL
}
</syntaxhighlight>
Example env vars:
<syntaxhighlight lang="bash">
export SLURM_GPUNODE_PARTITION='compute'
export SLURM_GPUNODE_NUM_GPUS=1
export SLURM_GPUNODE_CPUS_PER_GPU=4
export SLURM_XPUNODE_SHELL='/bin/bash'
</syntaxhighlight>
Integrate the example script into your shell, then run <code>gpunode</code>.
You can see partition options by running <code>sinfo</code>.
You might get an error like <code>groups: cannot find name for group ID 1506</code>, but things should still run fine. Check with <code>nvidia-smi</code>.
[[Category:K-Scale]]
554106f928ebdcd704a5bcb31c89d8b23f6dcced
865
853
2024-05-06T02:58:37Z
Ben
2
/* Reserving a GPU */
wikitext
text/x-wiki
The K-Scale Labs clusters are shared clusters for robotics research. This page contains notes on how to access them.
== Onboarding ==
To get onboarded, you should send us the public key that you want to use and maybe your preferred username.
After being onboarded, you should receive the following information:
* Your user ID (for this example, we'll use <code>stompy</code>)
* The jumphost ID (for this example, we'll use <code>127.0.0.1</code>)
* The cluster ID (for this example, we'll use <code>127.0.0.2</code>)
To connect, you should be able to use the following command:
<syntaxhighlight lang="bash">
ssh -o ProxyCommand="ssh -i ~/.ssh/id_rsa -W %h:%p stompy@127.0.0.1" stompy@127.0.0.2 -i ~/.ssh/id_rsa
</syntaxhighlight>
Note that <code>~/.ssh/id_rsa</code> should point to your private key file.
Alternatively, you can add the following to your SSH config file, which should allow you to connect directly.
Use your favorite editor to open the SSH config file (normally located at <code>~/.ssh/config</code> on Ubuntu) and paste the text:
<syntaxhighlight lang="text">
Host jumphost
User stompy
Hostname 127.0.0.1
IdentityFile ~/.ssh/id_rsa
Host cluster
User stompy
Hostname 127.0.0.2
ProxyJump jumphost
IdentityFile ~/.ssh/id_rsa
</syntaxhighlight>
After setting this up, you can use the command <code>ssh cluster</code> to directly connect.
You can also access via VS Code. Tutorial of using <code>ssh</code> in VS Code is [https://code.visualstudio.com/docs/remote/ssh-tutorial here].
Please inform us if you have any issues!
=== Notes ===
* You may need to restart <code>ssh</code> to get it working.
* You may be sharing your part of the cluster with other users. If so, it is a good idea to avoid using all the GPUs. If you're training models in PyTorch, you can do this using the <code>CUDA_VISIBLE_DEVICES</code> environment variable (see the example after these notes).
* You should avoid storing data files and model checkpoints to your root directory. Instead, use the <code>/ephemeral</code> directory. Your home directory should come with a symlink to a subdirectory which you have write access to.
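For example, a PyTorch script can limit itself to a subset of GPUs by setting the variable before any CUDA context is created (the GPU indices below are just an example):
<syntaxhighlight lang="python">
# Restrict this process to GPUs 0 and 1 so other users can use the remaining devices.
# The variable must be set before CUDA is initialized, so do it at the top of the script.
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"

import torch

print(torch.cuda.device_count())  # reports 2: only the visible devices are counted
</syntaxhighlight>
You can also export the variable in your shell before launching the training script.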
=== Cluster 1 ===
=== Cluster 2 ===
The cluster has 8 available nodes (each with 8 GPUs):
<syntaxhighlight lang="text">
compute-permanent-node-68
compute-permanent-node-285
compute-permanent-node-493
compute-permanent-node-625
compute-permanent-node-626
compute-permanent-node-749
compute-permanent-node-801
compute-permanent-node-580
</syntaxhighlight>
When you SSH in, you land on the bastion node <code>pure-caribou-bastion</code>, from which you can log in to any other node to test your code.
== Reserving a GPU ==
Here is a script you can use for getting an interactive node through Slurm.
<syntaxhighlight lang="bash">
gpunode () {
    # If an interactive "gpunode" job is already running, attach to it instead of queueing a new one.
    local job_id=$(squeue -u $USER -h -t R -o %i -n gpunode)
    if [[ -n $job_id ]]
    then
        echo "Attaching to job ID $job_id"
        srun --jobid=$job_id --partition=$SLURM_GPUNODE_PARTITION --gpus=$SLURM_GPUNODE_NUM_GPUS --cpus-per-gpu=$SLURM_GPUNODE_CPUS_PER_GPU --pty $SLURM_XPUNODE_SHELL
        return 0
    fi
    # Otherwise request a fresh interactive allocation and open a shell on the granted node.
    echo "Creating new job"
    srun --partition=$SLURM_GPUNODE_PARTITION --gpus=$SLURM_GPUNODE_NUM_GPUS --cpus-per-gpu=$SLURM_GPUNODE_CPUS_PER_GPU --interactive --job-name=gpunode --pty $SLURM_XPUNODE_SHELL
}
</syntaxhighlight>
Example env vars:
<syntaxhighlight lang="bash">
export SLURM_GPUNODE_PARTITION='compute'
export SLURM_GPUNODE_NUM_GPUS=1
export SLURM_GPUNODE_CPUS_PER_GPU=4
export SLURM_XPUNODE_SHELL='/bin/bash'
</syntaxhighlight>
Integrate the example script into your shell then run <code>gpunode</code>.
You can see partition options by running <code>sinfo</code>.
You might get an error like this: <code>groups: cannot find name for group ID 1506</code>. But things should still run fine. Check with <code>nvidia-smi</code>.
[[Category:K-Scale]]
1551cf14eb37ec1b4afa9dc2e329db2dd4704bcc
User:Dymaxion
2
205
867
2024-05-06T03:39:00Z
Dymaxion
22
Created page with "Kenji Sakaie is an incoming K-Scale intern and mechanical engineering student. {{infobox user | Name = Kenji Sakaie | Organization = K-Scale Labs and Franklin W. Olin College..."
wikitext
text/x-wiki
Kenji Sakaie is an incoming K-Scale intern and mechanical engineering student.
{{infobox user
| Name = Kenji Sakaie
| Organization = K-Scale Labs and Franklin W. Olin College of Engineering
| Title = Intern
}}
31edad4910e4da28e3584ef4e6eba07b36f67ed7
869
867
2024-05-06T03:41:18Z
Dymaxion
22
wikitext
text/x-wiki
Kenji Sakaie is an incoming K-Scale intern and mechanical engineering student.
{{infobox user
}}
2518588e154c7b12509c61807ed63151664f126c
870
869
2024-05-06T03:44:29Z
Dymaxion
22
wikitext
text/x-wiki
Kenji Sakaie is an incoming K-Scale intern and mechanical engineering student.
{{infobox person
| name = Kenji Sakaie
| organization = K-Scale Labs
| title = Technical Intern
}}
[[Category: K-Scale Employees]]
405768c9cfb9d33bd79d7aa27fa89f4386921534
Template:Infobox user
10
206
868
2024-05-06T03:39:25Z
Dymaxion
22
Created page with "{{infobox company | name = Name | country = Country | website_link = https://link.com/ | robots = [[Stompy]] }}"
wikitext
text/x-wiki
{{infobox company
| name = Name
| country = Country
| website_link = https://link.com/
| robots = [[Stompy]]
}}
788ee7706cdb15f6d9fd55449a0959a0d1ceff65
Robonaut2
0
207
872
2024-05-06T04:09:57Z
Dymaxion
22
Created page with "Robonaut2 was the first humanoid robot to be launched into space. It was developed by the NASA Johnson Space Center and launched in 2011, tested about the International Space..."
wikitext
text/x-wiki
Robonaut2 was the first humanoid robot to be launched into space. It was developed by the NASA Johnson Space Center, launched in 2011, tested aboard the International Space Station, and returned to Earth for study in 2018. Robonaut2 was built to test how humanoid robots can augment crewed space missions.
{{infobox robot
| name = Robonaut2
| organization = [[NASA]]
| video_link = https://www.youtube.com/watch?v=mtl48NOtyg0
| cost = $2,500,000
| height = 3' 4"
| weight = 330 lbs
| speed = 7 feet per second
| lift_force = 40 lbs
| number_made = 1
| status = Decommissioned
}}
[[Category:Robots]]
9c361afa851b4daa47e5d9165a1818ae47bcf046
ODrive
0
170
874
715
2024-05-06T04:49:52Z
Ben
2
wikitext
text/x-wiki
'''ODrive''' is a precision motor controller well suited to robotics, CNC machines, and other areas demanding high-performance motor control. It provides accurate control over both hobby-grade and industrial electric motors.
{{infobox actuator
| name = ODrive Motor Controller
| manufacturer = ODrive Robotics
| cost =
| purchase_link = https://odriverobotics.com/
| nominal_torque =
| peak_torque =
| weight =
| dimensions =
| gear_ratio =
| voltage =
| cad_link =
| interface =
| gear_type =
}}
== Product Description ==
ODrive is primarily used for precise control of electric motors, both hobby-grade and industrial, and is designed for applications demanding high performance. The controller's flexibility allows it to be used in a variety of applications including, but not limited to, robotics and CNC machines.
== Industrial Application ==
The ODrive motor controller finds its application in numerous sectors, particularly in robotics and CNC machines. This is mainly due to its flexibility, which facilitates control over a variety of motors ranging from hobby-grade to industrial ones. Furthermore, the precision and high performance offered by the controller make it a preferred choice for professionals.
== Technical Specifications ==
While the exact technical specifications of the ODrive motor controller can vary depending on the model, the controller is generally characterized by the following features:
* High performance motor control: Ensures precision and accuracy.
* Flexible interface: Facilitates easy integration with a variety of motors.
* Compatible with various types of electric motors: Ensures wide application.
== References ==
<references />
[[Category:Actuators]]
3ac8e5b196611a7d54a83f47a020a5998001227f
Underactuated Robotics
0
13
883
63
2024-05-06T17:59:10Z
Vrtnis
21
wikitext
text/x-wiki
This is a course taught by Russ Tedrake at MIT.
https://underactuated.csail.mit.edu/
[[:Category:Courses]]
60768f68f812a6c589c274fc03d09dd3780f2c67
885
883
2024-05-06T18:01:11Z
Vrtnis
21
wikitext
text/x-wiki
This is a course taught by Russ Tedrake at MIT.
https://underactuated.csail.mit.edu/
[[Category:Courses]]
e39eec07ba37c96133c098a891395dbe79290bed
Category:Courses
14
208
884
2024-05-06T17:59:37Z
Vrtnis
21
Created blank page
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
Main Page
0
1
886
875
2024-05-06T18:02:48Z
Vrtnis
21
/* Resources */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
==== Resources ====
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
|
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[Reinforcement Learning]]
| Resources related to understanding reinforcement learning
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Courses]]
| Category for pages about useful courses related to robotics
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== List of Actuators ===
{| class="wikitable"
|-
! Actuator
! Notes
|-
| [[MyActuator X-Series]]
| MIT Cheetah-like quasi-direct drive actuator, with planetary gears
|-
| [[OBot]]
| Open-source actuator
|-
| [[SPIN Servo]]
| Open-source actuator
|-
| [[VESCular6]]
| A project based on [[VESC]]
|-
| [[ODrive]]
|
|-
| [[Solo Motor Controller]]
| A motor controller alternative to the [[ODrive]].
|-
| [[J60]]
| Actuators built for the [[DEEP Robotics]] quadrupeds.
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[Xpeng]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]], [[Robonaut2]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[Proxy]]
|
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[SUPCON]]
| [[Navigator α]]
|}
8960969842789dc22dfaaef248914a05133390f0
887
886
2024-05-06T20:33:27Z
Ben
2
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
==== Resources ====
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
|
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[Reinforcement Learning]]
| Resources related to understanding reinforcement learning
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Courses]]
| Category for pages about useful courses related to robotics
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== List of Actuators ===
{| class="wikitable"
|-
! Actuator
! Notes
|-
| [[MyActuator X-Series]]
| MIT Cheetah-like quasi-direct drive actuator, with planetary gears
|-
| [[OBot]]
| Open-source actuator
|-
| [[SPIN Servo]]
| Open-source actuator
|-
| [[VESCular6]]
| A project based on [[VESC]]
|-
| [[ODrive]]
|
|-
| [[Solo Motor Controller]]
| A motor controller alternative to the [[ODrive]].
|-
| [[J60]]
| Actuators built for the [[DEEP Robotics]] quadrupeds.
|-
| [[K-Scale Motor Controller]]
| An open-source motor controller
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[Xpeng]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]], [[Robonaut2]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[Proxy]]
|
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[SUPCON]]
| [[Navigator α]]
|}
61d0053b0a71c847d2d1b4acd0a86212ec2e55b8
922
887
2024-05-07T02:33:45Z
Modeless
7
/* List of Humanoid Robots */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
==== Resources ====
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
|
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[Reinforcement Learning]]
| Resources related to understanding reinforcement learning
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Courses]]
| Category for pages about useful courses related to robotics
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== List of Actuators ===
{| class="wikitable"
|-
! Actuator
! Notes
|-
| [[MyActuator X-Series]]
| MIT Cheetah-like quasi-direct drive actuator, with planetary gears
|-
| [[OBot]]
| Open-source actuator
|-
| [[SPIN Servo]]
| Open-source actuator
|-
| [[VESCular6]]
| A project based on [[VESC]]
|-
| [[ODrive]]
|
|-
| [[Solo Motor Controller]]
| A motor controller alternative to the [[ODrive]].
|-
| [[J60]]
| Actuators built for the [[DEEP Robotics]] quadrupeds.
|-
| [[K-Scale Motor Controller]]
| An open-source motor controller
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[Xpeng]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]], [[Robonaut2]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[Proxy]]
|
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[SUPCON]]
| [[Navigator α]]
|-
| [[LASER Robotics]]
| [[HECTOR V2]]
|}
8db2633bdd9bf2ea653caaeca47a8d6f08ea2c9e
927
922
2024-05-07T04:49:10Z
185.169.0.83
0
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
==== Resources ====
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
|
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[Reinforcement Learning]]
| Resources related to understanding reinforcement learning
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Courses]]
| Category for pages about useful courses related to robotics
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== List of Actuators ===
{| class="wikitable"
|-
! Actuator
! Notes
|-
| [[MyActuator X-Series]]
| MIT Cheetah-like quasi-direct drive actuator, with planetary gears
|-
| [[OBot]]
| Open-source actuator
|-
| [[SPIN Servo]]
| Open-source actuator
|-
| [[VESCular6]]
| A project based on [[VESC]]
|-
| [[ODrive]]
|
|-
| [[Solo Motor Controller]]
| A motor controller alternative to the [[ODrive]].
|-
| [[J60]]
| Actuators built for the [[DEEP Robotics]] quadrupeds.
|-
| [[K-Scale Motor Controller]]
| An open-source motor controller
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[Xpeng]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]], [[Robonaut2]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[Proxy]]
|
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[SUPCON]]
| [[Navigator α]]
|-
| [[LASER Robotics]]
| [[HECTOR V2]]
|-
| [[UC Berkeley]]
| [[Berkeley Blue]]
|}
9cf89c7b075cd779c7bc32ce60981f7fef320b71
939
927
2024-05-07T19:55:24Z
Vrtnis
21
/* Resources */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
==== Resources ====
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
| Textbook for graduate-level courses in robotics
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[Reinforcement Learning]]
| Resources related to understanding reinforcement learning
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Courses]]
| Category for pages about useful courses related to robotics
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== List of Actuators ===
{| class="wikitable"
|-
! Actuator
! Notes
|-
| [[MyActuator X-Series]]
| MIT Cheetah-like quasi-direct drive actuator, with planetary gears
|-
| [[OBot]]
| Open-source actuator
|-
| [[SPIN Servo]]
| Open-source actuator
|-
| [[VESCular6]]
| A project based on [[VESC]]
|-
| [[ODrive]]
|
|-
| [[Solo Motor Controller]]
| A motor controller alternative to the [[ODrive]].
|-
| [[J60]]
| Actuators built for the [[DEEP Robotics]] quadrupeds.
|-
| [[K-Scale Motor Controller]]
| An open-source motor controller
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[Xpeng]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]], [[Robonaut2]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[Proxy]]
|
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[SUPCON]]
| [[Navigator α]]
|-
| [[LASER Robotics]]
| [[HECTOR V2]]
|-
| [[UC Berkeley]]
| [[Berkeley Blue]]
|}
87d2e28f0ed4c1b0f2ce2062d8339112a5159bc3
942
939
2024-05-07T20:04:33Z
Vrtnis
21
/* List of Actuators */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
==== Resources ====
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
| Textbook for graduate-level courses in robotics
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[Reinforcement Learning]]
| Resources related to understanding reinforcement learning
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Courses]]
| Category for pages about useful courses related to robotics
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== List of Actuators ===
{| class="wikitable"
|-
! Actuator
! Notes
|-
| [[MyActuator X-Series]]
| MIT Cheetah-like quasi-direct drive actuator, with planetary gears
|-
| [[OBot]]
| Open-source actuator
|-
| [[SPIN Servo]]
| Open-source actuator
|-
| [[VESCular6]]
| A project based on [[VESC]]
|-
| [[ODrive]]
| Precision motor controller
|-
| [[Solo Motor Controller]]
| A motor controller alternative to the [[ODrive]].
|-
| [[J60]]
| Actuators built for the [[DEEP Robotics]] quadrupeds.
|-
| [[K-Scale Motor Controller]]
| An open-source motor controller
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[Xpeng]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]], [[Robonaut2]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[Proxy]]
|
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[SUPCON]]
| [[Navigator α]]
|-
| [[LASER Robotics]]
| [[HECTOR V2]]
|-
| [[UC Berkeley]]
| [[Berkeley Blue]]
|}
e02cd5e623f5a0cfec8ab70a9581a9f6bfe3f62e
943
942
2024-05-07T20:05:02Z
Vrtnis
21
/* List of Actuators */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
==== Resources ====
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
| Textbook for graduate-level courses in robotics
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[Reinforcement Learning]]
| Resources related to understanding reinforcement learning
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Courses]]
| Category for pages about useful courses related to robotics
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== List of Actuators ===
{| class="wikitable"
|-
! Actuator
! Notes
|-
| [[MyActuator X-Series]]
| MIT Cheetah-like quasi-direct drive actuator, with planetary gears
|-
| [[OBot]]
| Open-source actuator
|-
| [[SPIN Servo]]
| Open-source actuator
|-
| [[VESCular6]]
| A project based on [[VESC]]
|-
| [[ODrive]]
| A precision motor controller
|-
| [[Solo Motor Controller]]
| A motor controller alternative to the [[ODrive]].
|-
| [[J60]]
| Actuators built for the [[DEEP Robotics]] quadrupeds.
|-
| [[K-Scale Motor Controller]]
| An open-source motor controller
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[Xpeng]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]], [[Robonaut2]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[Proxy]]
|
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[SUPCON]]
| [[Navigator α]]
|-
| [[LASER Robotics]]
| [[HECTOR V2]]
|-
| [[UC Berkeley]]
| [[Berkeley Blue]]
|}
bf013f060fbe89a9f1fa7d35775ab7204417e5ff
K-Scale Motor Controller
0
209
888
2024-05-06T20:49:22Z
Ben
2
Created page with "This is the K-Scale Motor Controller design document. === Requirements === * '''Microcontroller''' ** STM * '''Power supply''' ** Assume we will have 48 volt power to the bo..."
wikitext
text/x-wiki
This is the K-Scale Motor Controller design document.
=== Requirements ===
* '''Microcontroller'''
** STM
* '''Power supply'''
** Assume we will have 48 volt power to the board
* '''Communication'''
** CAN bus
** 1 mbps
* '''Sensing'''
** Temperature
** Relative position
** Absolute (single turn) position
** Current sensing
* '''Programming port'''
** USB C
=== Design ===
* [https://github.com/kscalelabs/motor-controller See this repository]
3b469fce1ea2dbec0c0effd712b227a99751b5b1
889
888
2024-05-06T21:03:38Z
32.213.80.127
0
/* Requirements */ add mcu's
wikitext
text/x-wiki
This is the K-Scale Motor Controller design document.
=== Requirements ===
* '''Microcontroller'''
** STM
*** Other options:
**** Infineon XMC4800
**** NXP LPC1549
* '''Power supply'''
** Assume we will have 48 volt power to the board
* '''Communication'''
** CAN bus
** 1 mbps
* '''Sensing'''
** Temperature
** Relative position
** Absolute (single turn) position
** Current sensing
* '''Programming port'''
** USB C
=== Design ===
* [https://github.com/kscalelabs/motor-controller See this repository]
3b8d0076d3fb5beb7d4ad4947c73dfc22cd68490
890
889
2024-05-06T21:05:08Z
Ben
2
wikitext
text/x-wiki
This is the K-Scale Motor Controller design document.
=== Requirements ===
* '''Microcontroller'''
** STM
** Other options:
*** Infineon XMC4800
*** NXP LPC1549
* '''Power supply'''
** Assume we will have 48 volt power to the board
* '''Communication'''
** CAN bus
** 1 mbps
* '''Sensing'''
** Temperature
** Relative position
** Absolute (single turn) position
** Current sensing
* '''Programming port'''
** USB C
=== Design ===
* [https://github.com/kscalelabs/motor-controller See this repository]
d20f56b85fabeb5447edc5290f4885b4834cc736
891
890
2024-05-06T21:06:05Z
Ben
2
wikitext
text/x-wiki
This is the K-Scale Motor Controller design document.
=== Requirements ===
* '''Microcontroller'''
** STM
** Other options:
*** Infineon XMC4800<ref>https://www.infineon.com/cms/en/product/microcontroller/32-bit-industrial-microcontroller-based-on-arm-cortex-m/32-bit-xmc4000-industrial-microcontroller-arm-cortex-m4/xmc4800/</ref>
*** NXP LPC1549
* '''Power supply'''
** Assume we will have 48 volt power to the board
* '''Communication'''
** CAN bus
** 1 mbps
* '''Sensing'''
** Temperature
** Relative position
** Absolute (single turn) position
** Current sensing
* '''Programming port'''
** USB C
=== Design ===
* [https://github.com/kscalelabs/motor-controller See this repository]
=== References ===
<references />
5ec008f43be10466600b34335870ecee30585804
892
891
2024-05-06T21:07:50Z
Ben
2
wikitext
text/x-wiki
This is the K-Scale Motor Controller design document.
=== Requirements ===
* '''Microcontroller'''
** STM
** Other options:
*** Infineon XMC4800<ref>https://www.infineon.com/cms/en/product/microcontroller/32-bit-industrial-
microcontroller-based-on-arm-cortex-m/32-bit-xmc4000-industrial-microcontroller-arm-cortex-m4/xmc4800/</ref>
*** NXP LPC1549
*** RP2040
* '''Power supply'''
** Assume we will have 48 volt power to the board
* '''Communication'''
** CAN bus
** 1 mbps
* '''Sensing'''
** Temperature
** Relative position
** Absolute (single turn) position
** Current sensing
* '''Programming port'''
** USB C
=== Design ===
* [https://github.com/kscalelabs/motor-controller See this repository]
=== References ===
<references />
1f7c438cf834a226e3bfdf55ae627dacbf36efd5
893
892
2024-05-06T21:11:58Z
Ben
2
wikitext
text/x-wiki
This is the K-Scale Motor Controller design document.
=== Requirements ===
* '''Microcontroller'''
** STM
** Other options:
*** Infineon XMC4800<ref>https://www.infineon.com/cms/en/product/microcontroller/32-bit-industrial-microcontroller-based-on-arm-cortex-m/32-bit-xmc4000-industrial-microcontroller-arm-cortex-m4/xmc4800/</ref>
*** NXP LPC1549
*** RP2040
* '''Power supply'''
** Assume we will have 48 volt power to the board
* '''Communication'''
** CAN bus
** 1 mbps
* '''Sensing'''
** Temperature
** Relative position
** Absolute (single turn) position
** Current sensing
* '''Programming port'''
** USB C
=== Design ===
* [https://github.com/kscalelabs/motor-controller See this repository]
=== References ===
<references />
c3fc8681c5d61d45912e60fd6583d6b3a81890e8
894
893
2024-05-06T21:13:23Z
Matt
16
/* Requirements */ add sub categories
wikitext
text/x-wiki
This is the K-Scale Motor Controller design document.
=== Requirements ===
* '''Microcontroller'''
** STM
** Other options:
*** integrated motor control MCUs
**** Infineon XMC4800<ref>https://www.infineon.com/cms/en/product/microcontroller/32-bit-industrial-microcontroller-based-on-arm-cortex-m/32-bit-xmc4000-industrial-microcontroller-arm-cortex-m4/xmc4800/</ref>
**** NXP LPC1549
*** MCU's
**** RP2040
* '''Power supply'''
** Assume we will have 48 volt power to the board
* '''Communication'''
** CAN bus
** 1 mbps
* '''Sensing'''
** Temperature
** Relative position
** Absolute (single turn) position
** Current sensing
* '''Programming port'''
** USB C
=== Design ===
* [https://github.com/kscalelabs/motor-controller See this repository]
=== References ===
<references />
31d6aa47246c99af94a15b100c909c026c405068
895
894
2024-05-06T21:17:49Z
Matt
16
/* Requirements */ add ref for NXP
wikitext
text/x-wiki
This is the K-Scale Motor Controller design document.
=== Requirements ===
* '''Microcontroller'''
** STM
** Other options:
*** integrated motor control MCUs
**** Infineon XMC4800<ref>https://www.infineon.com/cms/en/product/microcontroller/32-bit-industrial-microcontroller-based-on-arm-cortex-m/32-bit-xmc4000-industrial-microcontroller-arm-cortex-m4/xmc4800/</ref>
**** NXP LPC1549<ref>https://www.digikey.com/en/products/detail/nxp-usa-inc/LPC1549JBD64QL/4696352?utm_adgroup=&utm_source=google&utm_medium=cpc&utm_campaign=PMax%20Shopping_Product_Medium%20ROAS%20Categories&utm_term=&utm_content=&utm_id=go_cmp-20223376311_adg-_ad-__dev-c_ext-_prd-_sig-Cj0KCQjw_-GxBhC1ARIsADGgDjuYk7Hf3F-L_hoQ_4E-fPfjELknu3EAmm9IstEnc92wSAbIMEt0UZAaAsneEALw_wcB&gad_source=1&gclid=Cj0KCQjw_-GxBhC1ARIsADGgDjuYk7Hf3F-L_hoQ_4E-fPfjELknu3EAmm9IstEnc92wSAbIMEt0UZAaAsneEALw_wcB</ref>
*** MCU's
**** RP2040
* '''Power supply'''
** Assume we will have 48 volt power to the board
* '''Communication'''
** CAN bus
** 1 mbps
* '''Sensing'''
** Temperature
** Relative position
** Absolute (single turn) position
** Current sensing
* '''Programming port'''
** USB C
=== Design ===
* [https://github.com/kscalelabs/motor-controller See this repository]
=== References ===
<references />
2f2e5a74c8fcc84da4c70ab174eeead8120aa137
896
895
2024-05-06T21:18:53Z
Matt
16
/* Requirements */ add CAN transceiver
wikitext
text/x-wiki
This is the K-Scale Motor Controller design document.
=== Requirements ===
* '''Microcontroller'''
** STM
** Other options:
*** integrated motor control MCUs
**** Infineon XMC4800<ref>https://www.infineon.com/cms/en/product/microcontroller/32-bit-industrial-microcontroller-based-on-arm-cortex-m/32-bit-xmc4000-industrial-microcontroller-arm-cortex-m4/xmc4800/</ref>
**** NXP LPC1549<ref>https://www.digikey.com/en/products/detail/nxp-usa-inc/LPC1549JBD64QL/4696352?utm_adgroup=&utm_source=google&utm_medium=cpc&utm_campaign=PMax%20Shopping_Product_Medium%20ROAS%20Categories&utm_term=&utm_content=&utm_id=go_cmp-20223376311_adg-_ad-__dev-c_ext-_prd-_sig-Cj0KCQjw_-GxBhC1ARIsADGgDjuYk7Hf3F-L_hoQ_4E-fPfjELknu3EAmm9IstEnc92wSAbIMEt0UZAaAsneEALw_wcB&gad_source=1&gclid=Cj0KCQjw_-GxBhC1ARIsADGgDjuYk7Hf3F-L_hoQ_4E-fPfjELknu3EAmm9IstEnc92wSAbIMEt0UZAaAsneEALw_wcB</ref>
*** MCU's
**** RP2040
* '''Power supply'''
** Assume we will have 48 volt power to the board
* '''Communication'''
** CAN bus
*** Texas Instruments ISO1050DUBR <ref>https://www.lcsc.com/product-detail/Isolated-CAN-Transceivers_Texas-Instruments-ISO1050DUBR_C16428.html</ref>
** 1 mbps
* '''Sensing'''
** Temperature
** Relative position
** Absolute (single turn) position
** Current sensing
* '''Programming port'''
** USB C
=== Design ===
* [https://github.com/kscalelabs/motor-controller See this repository]
=== References ===
<references />
a85d76abe9d7eee86c4c10889d1e9c5eede075e4
897
896
2024-05-06T21:23:01Z
Matt
16
/* Requirements */ Add temp sensor
wikitext
text/x-wiki
This is the K-Scale Motor Controller design document.
=== Requirements ===
* '''Microcontroller'''
** STM
** Other options:
*** integrated motor control MCUs
**** Infineon XMC4800<ref>https://www.infineon.com/cms/en/product/microcontroller/32-bit-industrial-microcontroller-based-on-arm-cortex-m/32-bit-xmc4000-industrial-microcontroller-arm-cortex-m4/xmc4800/</ref>
**** NXP LPC1549<ref>https://www.digikey.com/en/products/detail/nxp-usa-inc/LPC1549JBD64QL/4696352?utm_adgroup=&utm_source=google&utm_medium=cpc&utm_campaign=PMax%20Shopping_Product_Medium%20ROAS%20Categories&utm_term=&utm_content=&utm_id=go_cmp-20223376311_adg-_ad-__dev-c_ext-_prd-_sig-Cj0KCQjw_-GxBhC1ARIsADGgDjuYk7Hf3F-L_hoQ_4E-fPfjELknu3EAmm9IstEnc92wSAbIMEt0UZAaAsneEALw_wcB&gad_source=1&gclid=Cj0KCQjw_-GxBhC1ARIsADGgDjuYk7Hf3F-L_hoQ_4E-fPfjELknu3EAmm9IstEnc92wSAbIMEt0UZAaAsneEALw_wcB</ref>
*** MCU's
**** RP2040
* '''Power supply'''
** Assume we will have 48 volt power to the board
* '''Communication'''
** CAN bus
*** Texas Instruments ISO1050DUBR <ref>https://www.lcsc.com/product-detail/Isolated-CAN-Transceivers_Texas-Instruments-ISO1050DUBR_C16428.html</ref>
** 1 mbps
* '''Sensing'''
** Temperature
*** Maxim Integrated DS18B20 <ref>https://www.lcsc.com/product-detail/Temperature-Sensors_Maxim-Integrated-DS18B20-T-R_C880672.html</ref>
** Relative position
** Absolute (single turn) position
** Current sensing
* '''Programming port'''
** USB C
=== Design ===
* [https://github.com/kscalelabs/motor-controller See this repository]
=== References ===
<references />
f910a94cfb3cc27ec9fd3b2e96a5aaefb7bd8e3d
898
897
2024-05-06T21:25:29Z
Ben
2
wikitext
text/x-wiki
This is the K-Scale Motor Controller design document.
=== Requirements ===
* '''Microcontroller'''
** STM
** Other options:
*** Integrated motor control MCUs
**** Infineon XMC4800<ref>https://www.infineon.com/cms/en/product/microcontroller/32-bit-industrial-microcontroller-based-on-arm-cortex-m/32-bit-xmc4000-industrial-microcontroller-arm-cortex-m4/xmc4800/</ref>
**** NXP LPC1549<ref>https://www.digikey.com/en/products/detail/nxp-usa-inc/LPC1549JBD64QL/4696352?utm_adgroup=&utm_source=google&utm_medium=cpc&utm_campaign=PMax%20Shopping_Product_Medium%20ROAS%20Categories&utm_term=&utm_content=&utm_id=go_cmp-20223376311_adg-_ad-__dev-c_ext-_prd-_sig-Cj0KCQjw_-GxBhC1ARIsADGgDjuYk7Hf3F-L_hoQ_4E-fPfjELknu3EAmm9IstEnc92wSAbIMEt0UZAaAsneEALw_wcB&gad_source=1&gclid=Cj0KCQjw_-GxBhC1ARIsADGgDjuYk7Hf3F-L_hoQ_4E-fPfjELknu3EAmm9IstEnc92wSAbIMEt0UZAaAsneEALw_wcB</ref>
*** MCUs
**** RP2040
* '''Power supply'''
** Assume we will have 48 volt power to the board
* '''Communication'''
** CAN bus
*** Texas Instruments ISO1050DUBR <ref>https://www.lcsc.com/product-detail/Isolated-CAN-Transceivers_Texas-Instruments-ISO1050DUBR_C16428.html</ref>
** 1 mbps
* '''Sensing'''
** Temperature
*** Maxim Integrated DS18B20 <ref>https://www.lcsc.com/product-detail/Temperature-Sensors_Maxim-Integrated-DS18B20-T-R_C880672.html</ref>
** Absolute (single turn) position
** Current sensing
* '''Programming port'''
** USB C
=== Design ===
* [https://github.com/kscalelabs/motor-controller See this repository]
=== References ===
<references />
e811e23502b08dcec5dd914faaffcde9efcecba1
899
898
2024-05-06T21:28:39Z
Matt
16
/* Requirements */ Add current sensing IC + "Other" sub sections
wikitext
text/x-wiki
This is the K-Scale Motor Controller design document.
=== Requirements ===
* '''Microcontroller'''
** STM
** Other options:
*** Integrated motor control MCUs
**** Infineon XMC4800<ref>https://www.infineon.com/cms/en/product/microcontroller/32-bit-industrial-microcontroller-based-on-arm-cortex-m/32-bit-xmc4000-industrial-microcontroller-arm-cortex-m4/xmc4800/</ref>
**** NXP LPC1549<ref>https://www.digikey.com/en/products/detail/nxp-usa-inc/LPC1549JBD64QL/4696352?utm_adgroup=&utm_source=google&utm_medium=cpc&utm_campaign=PMax%20Shopping_Product_Medium%20ROAS%20Categories&utm_term=&utm_content=&utm_id=go_cmp-20223376311_adg-_ad-__dev-c_ext-_prd-_sig-Cj0KCQjw_-GxBhC1ARIsADGgDjuYk7Hf3F-L_hoQ_4E-fPfjELknu3EAmm9IstEnc92wSAbIMEt0UZAaAsneEALw_wcB&gad_source=1&gclid=Cj0KCQjw_-GxBhC1ARIsADGgDjuYk7Hf3F-L_hoQ_4E-fPfjELknu3EAmm9IstEnc92wSAbIMEt0UZAaAsneEALw_wcB</ref>
*** MCUs
**** RP2040
* '''Power supply'''
** Assume we will have 48 volt power to the board
* '''Communication'''
** CAN bus
*** Other
**** Texas Instruments ISO1050DUBR <ref>https://www.lcsc.com/product-detail/Isolated-CAN-Transceivers_Texas-Instruments-ISO1050DUBR_C16428.html</ref>
** 1 mbps
* '''Sensing'''
** Temperature
*** Other
**** Maxim Integrated DS18B20 <ref>https://www.lcsc.com/product-detail/Temperature-Sensors_Maxim-Integrated-DS18B20-T-R_C880672.html</ref>
** Absolute (single turn) position
** Current sensing
*** Other
**** Allegro ACS770 <ref> https://www.lcsc.com/product-detail/Current-Sensors_Allegro-MicroSystems-LLC-ACS770LCB-050U-PFF-T_C696104.html </ref>
* '''Programming port'''
** USB C
=== Design ===
* [https://github.com/kscalelabs/motor-controller See this repository]
=== References ===
<references />
0a890c71d0371d5ed86548f2d8e57437571168db
900
899
2024-05-06T21:38:58Z
Matt
16
/* Requirements */ add hall effect sensors for position
wikitext
text/x-wiki
This is the K-Scale Motor Controller design document.
=== Requirements ===
* '''Microcontroller'''
** STM
** Other options:
*** Integrated motor control MCUs
**** Infineon XMC4800<ref>https://www.infineon.com/cms/en/product/microcontroller/32-bit-industrial-microcontroller-based-on-arm-cortex-m/32-bit-xmc4000-industrial-microcontroller-arm-cortex-m4/xmc4800/</ref>
**** NXP LPC1549<ref>https://www.digikey.com/en/products/detail/nxp-usa-inc/LPC1549JBD64QL/4696352?utm_adgroup=&utm_source=google&utm_medium=cpc&utm_campaign=PMax%20Shopping_Product_Medium%20ROAS%20Categories&utm_term=&utm_content=&utm_id=go_cmp-20223376311_adg-_ad-__dev-c_ext-_prd-_sig-Cj0KCQjw_-GxBhC1ARIsADGgDjuYk7Hf3F-L_hoQ_4E-fPfjELknu3EAmm9IstEnc92wSAbIMEt0UZAaAsneEALw_wcB&gad_source=1&gclid=Cj0KCQjw_-GxBhC1ARIsADGgDjuYk7Hf3F-L_hoQ_4E-fPfjELknu3EAmm9IstEnc92wSAbIMEt0UZAaAsneEALw_wcB</ref>
*** MCUs
**** RP2040
* '''Power supply'''
** Assume we will have 48 volt power to the board
* '''Communication'''
** CAN bus
*** Other
**** Texas Instruments ISO1050DUBR <ref>https://www.lcsc.com/product-detail/Isolated-CAN-Transceivers_Texas-Instruments-ISO1050DUBR_C16428.html</ref>
** 1 mbps
* '''Sensing'''
** Temperature
*** Other
**** Maxim Integrated DS18B20 <ref>https://www.lcsc.com/product-detail/Temperature-Sensors_Maxim-Integrated-DS18B20-T-R_C880672.html</ref>
** Absolute (single turn) position
*** Other
**** Infineon TLE5012B <ref> https://www.lcsc.com/product-detail/Position-Sensor_Infineon-Technologies-TLE5012B-E3005_C539928.html</ref>
**** AMS AS5047P <ref> https://www.lcsc.com/product-detail/Position-Sensor_AMS-AS5047P-ATSM_C962063.html </ref>
** Current sensing
*** Other
**** Allegro ACS770 <ref> https://www.lcsc.com/product-detail/Current-Sensors_Allegro-MicroSystems-LLC-ACS770LCB-050U-PFF-T_C696104.html </ref>
* '''Programming port'''
** USB C
=== Design ===
* [https://github.com/kscalelabs/motor-controller See this repository]
=== References ===
<references />
2374c7b47d6d0ee0c9efbd2ac272e3d99d0a4d9f
901
900
2024-05-06T21:43:58Z
Ben
2
wikitext
text/x-wiki
This is the K-Scale Motor Controller design document.
=== Requirements ===
* '''Microcontroller'''
** STM
** Other options:
*** Integrated motor control MCUs
**** Infineon XMC4800<ref>https://www.infineon.com/cms/en/product/microcontroller/32-bit-industrial-microcontroller-based-on-arm-cortex-m/32-bit-xmc4000-industrial-microcontroller-arm-cortex-m4/xmc4800/</ref>
**** NXP LPC1549<ref>https://www.digikey.com/en/products/detail/nxp-usa-inc/LPC1549JBD64QL/4696352?utm_adgroup=&utm_source=google&utm_medium=cpc&utm_campaign=PMax%20Shopping_Product_Medium%20ROAS%20Categories&utm_term=&utm_content=&utm_id=go_cmp-20223376311_adg-_ad-__dev-c_ext-_prd-_sig-Cj0KCQjw_-GxBhC1ARIsADGgDjuYk7Hf3F-L_hoQ_4E-fPfjELknu3EAmm9IstEnc92wSAbIMEt0UZAaAsneEALw_wcB&gad_source=1&gclid=Cj0KCQjw_-GxBhC1ARIsADGgDjuYk7Hf3F-L_hoQ_4E-fPfjELknu3EAmm9IstEnc92wSAbIMEt0UZAaAsneEALw_wcB</ref>
*** MCUs
**** RP2040
**** STM32F405RG
***** Used by [[ODrive]]
* '''Power supply'''
** Assume we will have 48 volt power to the board
* '''Communication'''
** CAN bus
*** Other
**** Texas Instruments ISO1050DUBR <ref>https://www.lcsc.com/product-detail/Isolated-CAN-Transceivers_Texas-Instruments-ISO1050DUBR_C16428.html</ref>
** 1 mbps
* '''Sensing'''
** Temperature
*** Other
**** Maxim Integrated DS18B20 <ref>https://www.lcsc.com/product-detail/Temperature-Sensors_Maxim-Integrated-DS18B20-T-R_C880672.html</ref>
** Absolute (single turn) position
*** Other
**** Infineon TLE5012B <ref> https://www.lcsc.com/product-detail/Position-Sensor_Infineon-Technologies-TLE5012B-E3005_C539928.html</ref>
**** AMS AS5047P <ref> https://www.lcsc.com/product-detail/Position-Sensor_AMS-AS5047P-ATSM_C962063.html </ref>
** Current sensing
*** Other
**** Allegro ACS770 <ref> https://www.lcsc.com/product-detail/Current-Sensors_Allegro-MicroSystems-LLC-ACS770LCB-050U-PFF-T_C696104.html </ref>
* '''Programming port'''
** USB C
=== Design ===
* [https://github.com/kscalelabs/motor-controller See this repository]
=== References ===
<references />
fe6fc2ef0f9ee10ae55ffc04eca15c44fc5d4b65
903
901
2024-05-06T21:45:20Z
Matt
16
/* Requirements */ formatting
wikitext
text/x-wiki
This is the K-Scale Motor Controller design document.
=== Requirements ===
* '''Microcontroller'''
** STM
** Other:
*** Integrated motor control MCUs
**** Infineon XMC4800<ref>https://www.infineon.com/cms/en/product/microcontroller/32-bit-industrial-microcontroller-based-on-arm-cortex-m/32-bit-xmc4000-industrial-microcontroller-arm-cortex-m4/xmc4800/</ref>
**** NXP LPC1549<ref>https://www.digikey.com/en/products/detail/nxp-usa-inc/LPC1549JBD64QL/4696352?utm_adgroup=&utm_source=google&utm_medium=cpc&utm_campaign=PMax%20Shopping_Product_Medium%20ROAS%20Categories&utm_term=&utm_content=&utm_id=go_cmp-20223376311_adg-_ad-__dev-c_ext-_prd-_sig-Cj0KCQjw_-GxBhC1ARIsADGgDjuYk7Hf3F-L_hoQ_4E-fPfjELknu3EAmm9IstEnc92wSAbIMEt0UZAaAsneEALw_wcB&gad_source=1&gclid=Cj0KCQjw_-GxBhC1ARIsADGgDjuYk7Hf3F-L_hoQ_4E-fPfjELknu3EAmm9IstEnc92wSAbIMEt0UZAaAsneEALw_wcB</ref>
*** MCUs
**** RP2040
**** STM32F405RG
***** Used by [[ODrive]]
* '''Power supply'''
** Assume we will have 48 volt power to the board
* '''Communication'''
** CAN bus
*** Other
**** Texas Instruments ISO1050DUBR <ref>https://www.lcsc.com/product-detail/Isolated-CAN-Transceivers_Texas-Instruments-ISO1050DUBR_C16428.html</ref>
** 1 mbps
* '''Sensing'''
** Temperature
*** Other
**** Maxim Integrated DS18B20 <ref>https://www.lcsc.com/product-detail/Temperature-Sensors_Maxim-Integrated-DS18B20-T-R_C880672.html</ref>
** Absolute (single turn) position
*** Other
**** Infineon TLE5012B <ref> https://www.lcsc.com/product-detail/Position-Sensor_Infineon-Technologies-TLE5012B-E3005_C539928.html</ref>
**** AMS AS5047P <ref> https://www.lcsc.com/product-detail/Position-Sensor_AMS-AS5047P-ATSM_C962063.html </ref>
** Current sensing
*** Other
**** Allegro ACS770 <ref> https://www.lcsc.com/product-detail/Current-Sensors_Allegro-MicroSystems-LLC-ACS770LCB-050U-PFF-T_C696104.html </ref>
* '''Programming port'''
** USB C
=== Design ===
* [https://github.com/kscalelabs/motor-controller See this repository]
=== References ===
<references />
086702a57181f686b016b1eebebb925176339fdf
904
903
2024-05-06T21:47:32Z
Matt
16
/* Requirements */ fix odrive category placement
wikitext
text/x-wiki
This is the K-Scale Motor Controller design document.
=== Requirements ===
* '''Microcontroller'''
** STM
** Other:
*** Integrated motor control MCUs
**** Infineon XMC4800<ref>https://www.infineon.com/cms/en/product/microcontroller/32-bit-industrial-microcontroller-based-on-arm-cortex-m/32-bit-xmc4000-industrial-microcontroller-arm-cortex-m4/xmc4800/</ref>
**** NXP LPC1549<ref>https://www.digikey.com/en/products/detail/nxp-usa-inc/LPC1549JBD64QL/4696352?utm_adgroup=&utm_source=google&utm_medium=cpc&utm_campaign=PMax%20Shopping_Product_Medium%20ROAS%20Categories&utm_term=&utm_content=&utm_id=go_cmp-20223376311_adg-_ad-__dev-c_ext-_prd-_sig-Cj0KCQjw_-GxBhC1ARIsADGgDjuYk7Hf3F-L_hoQ_4E-fPfjELknu3EAmm9IstEnc92wSAbIMEt0UZAaAsneEALw_wcB&gad_source=1&gclid=Cj0KCQjw_-GxBhC1ARIsADGgDjuYk7Hf3F-L_hoQ_4E-fPfjELknu3EAmm9IstEnc92wSAbIMEt0UZAaAsneEALw_wcB</ref>
**** STM32F405RG
***** Used by [[ODrive]]
*** MCUs (without integrated motor control)
**** RP2040
* '''Power supply'''
** Assume we will have 48 volt power to the board
* '''Communication'''
** CAN bus
*** Other
**** Texas Instruments ISO1050DUBR <ref>https://www.lcsc.com/product-detail/Isolated-CAN-Transceivers_Texas-Instruments-ISO1050DUBR_C16428.html</ref>
** 1 mbps
* '''Sensing'''
** Temperature
*** Other
**** Maxim Integrated DS18B20 <ref>https://www.lcsc.com/product-detail/Temperature-Sensors_Maxim-Integrated-DS18B20-T-R_C880672.html</ref>
** Absolute (single turn) position
*** Other
**** Infineon TLE5012B <ref> https://www.lcsc.com/product-detail/Position-Sensor_Infineon-Technologies-TLE5012B-E3005_C539928.html</ref>
**** AMS AS5047P <ref> https://www.lcsc.com/product-detail/Position-Sensor_AMS-AS5047P-ATSM_C962063.html </ref>
** Current sensing
*** Other
**** Allegro ACS770 <ref> https://www.lcsc.com/product-detail/Current-Sensors_Allegro-MicroSystems-LLC-ACS770LCB-050U-PFF-T_C696104.html </ref>
* '''Programming port'''
** USB C
=== Design ===
* [https://github.com/kscalelabs/motor-controller See this repository]
=== References ===
<references />
2d45a9c347a98d982b8e90a55716a975d1fd4596
905
904
2024-05-06T21:48:03Z
Matt
16
/* Requirements */
wikitext
text/x-wiki
This is the K-Scale Motor Controller design document.
=== Requirements ===
* '''Microcontroller'''
** STM
** Other:
*** Integrated motor control MCUs
**** Infineon XMC4800<ref>https://www.infineon.com/cms/en/product/microcontroller/32-bit-industrial-microcontroller-based-on-arm-cortex-m/32-bit-xmc4000-industrial-microcontroller-arm-cortex-m4/xmc4800/</ref>
**** NXP LPC1549<ref>https://www.digikey.com/en/products/detail/nxp-usa-inc/LPC1549JBD64QL/4696352?utm_adgroup=&utm_source=google&utm_medium=cpc&utm_campaign=PMax%20Shopping_Product_Medium%20ROAS%20Categories&utm_term=&utm_content=&utm_id=go_cmp-20223376311_adg-_ad-__dev-c_ext-_prd-_sig-Cj0KCQjw_-GxBhC1ARIsADGgDjuYk7Hf3F-L_hoQ_4E-fPfjELknu3EAmm9IstEnc92wSAbIMEt0UZAaAsneEALw_wcB&gad_source=1&gclid=Cj0KCQjw_-GxBhC1ARIsADGgDjuYk7Hf3F-L_hoQ_4E-fPfjELknu3EAmm9IstEnc92wSAbIMEt0UZAaAsneEALw_wcB</ref>
**** STM32F405RG
***** Used by [[ODrive]]
***MCUs (without integrated motor control)
**** RP2040
* '''Power supply'''
** Assume we will have 48 volt power to the board
* '''Communication'''
** CAN bus
*** Other
**** Texas Instruments ISO1050DUBR <ref>https://www.lcsc.com/product-detail/Isolated-CAN-Transceivers_Texas-Instruments-ISO1050DUBR_C16428.html</ref>
** 1 mbps
* '''Sensing'''
** Temperature
*** Other
**** Maxim Integrated DS18B20 <ref>https://www.lcsc.com/product-detail/Temperature-Sensors_Maxim-Integrated-DS18B20-T-R_C880672.html</ref>
** Absolute (single turn) position
*** Other
**** Infineon TLE5012B <ref> https://www.lcsc.com/product-detail/Position-Sensor_Infineon-Technologies-TLE5012B-E3005_C539928.html</ref>
**** AMS AS5047P <ref> https://www.lcsc.com/product-detail/Position-Sensor_AMS-AS5047P-ATSM_C962063.html </ref>
** Current sensing
*** Other
**** Allegro ACS770 <ref> https://www.lcsc.com/product-detail/Current-Sensors_Allegro-MicroSystems-LLC-ACS770LCB-050U-PFF-T_C696104.html </ref>
* '''Programming port'''
** USB C
=== Design ===
* [https://github.com/kscalelabs/motor-controller See this repository]
=== References ===
<references />
0727c86a1b7f5cfda296eae976395868014ed8d3
906
905
2024-05-06T21:54:17Z
Matt
16
/* Requirements */ add comparison for mcus
wikitext
text/x-wiki
This is the K-Scale Motor Controller design document.
=== Requirements ===
* '''Microcontroller'''
** STM
** Other:
*** '''Integrated motor control MCUs'''
**** Infineon XMC4800<ref>https://www.infineon.com/cms/en/product/microcontroller/32-bit-industrial-microcontroller-based-on-arm-cortex-m/32-bit-xmc4000-industrial-microcontroller-arm-cortex-m4/xmc4800/</ref>
**** NXP LPC1549<ref>https://www.digikey.com/en/products/detail/nxp-usa-inc/LPC1549JBD64QL/4696352?utm_adgroup=&utm_source=google&utm_medium=cpc&utm_campaign=PMax%20Shopping_Product_Medium%20ROAS%20Categories&utm_term=&utm_content=&utm_id=go_cmp-20223376311_adg-_ad-__dev-c_ext-_prd-_sig-Cj0KCQjw_-GxBhC1ARIsADGgDjuYk7Hf3F-L_hoQ_4E-fPfjELknu3EAmm9IstEnc92wSAbIMEt0UZAaAsneEALw_wcB&gad_source=1&gclid=Cj0KCQjw_-GxBhC1ARIsADGgDjuYk7Hf3F-L_hoQ_4E-fPfjELknu3EAmm9IstEnc92wSAbIMEt0UZAaAsneEALw_wcB</ref>
**** STM32F405RG
***** Used by [[ODrive]]
*** For high-precision and complex control tasks with an emphasis on real-time networking and performance, the Infineon XMC4800 is suitable.
*** If budget and simplicity are key considerations, and less intensive processing is required, the NXP LPC1549 offers a good balance.
*** For a balance between performance and ecosystem support, with flexibility in hardware and software, the STM32F405RG is an excellent choice.
*** '''MCUs (without integrated motor control)'''
**** RP2040
* '''Power supply'''
** Assume we will have 48 volt power to the board
* '''Communication'''
** CAN bus
*** Other
**** Texas Instruments ISO1050DUBR <ref>https://www.lcsc.com/product-detail/Isolated-CAN-Transceivers_Texas-Instruments-ISO1050DUBR_C16428.html</ref>
** 1 mbps
* '''Sensing'''
** Temperature
*** Other
**** Maxim Integrated DS18B20 <ref>https://www.lcsc.com/product-detail/Temperature-Sensors_Maxim-Integrated-DS18B20-T-R_C880672.html</ref>
** Absolute (single turn) position
*** Other
**** Infineon TLE5012B <ref> https://www.lcsc.com/product-detail/Position-Sensor_Infineon-Technologies-TLE5012B-E3005_C539928.html</ref>
**** AMS AS5047P <ref> https://www.lcsc.com/product-detail/Position-Sensor_AMS-AS5047P-ATSM_C962063.html </ref>
** Current sensing
*** Other
**** Allegro ACS770 <ref> https://www.lcsc.com/product-detail/Current-Sensors_Allegro-MicroSystems-LLC-ACS770LCB-050U-PFF-T_C696104.html </ref>
* '''Programming port'''
** USB C
=== Design ===
* [https://github.com/kscalelabs/motor-controller See this repository]
=== References ===
<references />
d4052092f772f00e99ce3875cfe41f22f61bee2e
907
906
2024-05-06T22:08:37Z
Matt
16
/* Requirements */ add ref for STG
wikitext
text/x-wiki
This is the K-Scale Motor Controller design document.
=== Requirements ===
* '''Microcontroller'''
** STM
** Other:
*** '''Integrated motor control MCUs'''
**** Infineon XMC4800<ref>https://www.infineon.com/cms/en/product/microcontroller/32-bit-industrial-microcontroller-based-on-arm-cortex-m/32-bit-xmc4000-industrial-microcontroller-arm-cortex-m4/xmc4800/</ref>
**** NXP LPC1549<ref>https://www.digikey.com/en/products/detail/nxp-usa-inc/LPC1549JBD64QL/4696352?utm_adgroup=&utm_source=google&utm_medium=cpc&utm_campaign=PMax%20Shopping_Product_Medium%20ROAS%20Categories&utm_term=&utm_content=&utm_id=go_cmp-20223376311_adg-_ad-__dev-c_ext-_prd-_sig-Cj0KCQjw_-GxBhC1ARIsADGgDjuYk7Hf3F-L_hoQ_4E-fPfjELknu3EAmm9IstEnc92wSAbIMEt0UZAaAsneEALw_wcB&gad_source=1&gclid=Cj0KCQjw_-GxBhC1ARIsADGgDjuYk7Hf3F-L_hoQ_4E-fPfjELknu3EAmm9IstEnc92wSAbIMEt0UZAaAsneEALw_wcB</ref>
**** STM32F405RG<ref>https://www.lcsc.com/product-detail/Microcontroller-Units-MCUs-MPUs-SOCs_STMicroelectronics-STM32F405RGT6_C15742.html</ref>
***** Used by [[ODrive]]
*** For high-precision and complex control tasks with an emphasis on real-time networking and performance, the Infineon XMC4800 is suitable.
*** If budget and simplicity are key considerations, and less intensive processing is required, the NXP LPC1549 offers a good balance.
*** For a balance between performance and ecosystem support, with flexibility in hardware and software, the STM32F405RG is an excellent choice.
*** '''MCUs (without integrated motor control)'''
**** RP2040
* '''Power supply'''
** Assume we will have 48 volt power to the board
* '''Communication'''
** CAN bus
*** Other
**** Texas Instruments ISO1050DUBR <ref>https://www.lcsc.com/product-detail/Isolated-CAN-Transceivers_Texas-Instruments-ISO1050DUBR_C16428.html</ref>
** 1 mbps
* '''Sensing'''
** Temperature
*** Other
**** Maxim Integrated DS18B20 <ref>https://www.lcsc.com/product-detail/Temperature-Sensors_Maxim-Integrated-DS18B20-T-R_C880672.html</ref>
** Absolute (single turn) position
*** Other
**** Infineon TLE5012B <ref> https://www.lcsc.com/product-detail/Position-Sensor_Infineon-Technologies-TLE5012B-E3005_C539928.html</ref>
**** AMS AS5047P <ref> https://www.lcsc.com/product-detail/Position-Sensor_AMS-AS5047P-ATSM_C962063.html </ref>
** Current sensing
*** Other
**** Allegro ACS770 <ref> https://www.lcsc.com/product-detail/Current-Sensors_Allegro-MicroSystems-LLC-ACS770LCB-050U-PFF-T_C696104.html </ref>
* '''Programming port'''
** USB C
=== Design ===
* [https://github.com/kscalelabs/motor-controller See this repository]
=== References ===
<references />
b2d8af72be49bbb49fd7131465050563668e6598
908
907
2024-05-06T22:23:58Z
Matt
16
/* Requirements */ add xmc4500
wikitext
text/x-wiki
This is the K-Scale Motor Controller design document.
=== Requirements ===
* '''Microcontroller'''
** STM
** Other:
*** '''Integrated motor control MCUs'''
**** Infineon XMC4800<ref>https://www.infineon.com/cms/en/product/microcontroller/32-bit-industrial-microcontroller-based-on-arm-cortex-m/32-bit-xmc4000-industrial-microcontroller-arm-cortex-m4/xmc4800/</ref>
**** Infineon XMC4500<ref>https://www.digikey.com/en/products/detail/infineon-technologies/XMC4500F100K1024ACXQSA1/4807912</ref>
**** NXP LPC1549<ref>https://www.digikey.com/en/products/detail/nxp-usa-inc/LPC1549JBD64QL/4696352?utm_adgroup=&utm_source=google&utm_medium=cpc&utm_campaign=PMax%20Shopping_Product_Medium%20ROAS%20Categories&utm_term=&utm_content=&utm_id=go_cmp-20223376311_adg-_ad-__dev-c_ext-_prd-_sig-Cj0KCQjw_-GxBhC1ARIsADGgDjuYk7Hf3F-L_hoQ_4E-fPfjELknu3EAmm9IstEnc92wSAbIMEt0UZAaAsneEALw_wcB&gad_source=1&gclid=Cj0KCQjw_-GxBhC1ARIsADGgDjuYk7Hf3F-L_hoQ_4E-fPfjELknu3EAmm9IstEnc92wSAbIMEt0UZAaAsneEALw_wcB</ref>
**** STM32F405RG<ref>https://www.lcsc.com/product-detail/Microcontroller-Units-MCUs-MPUs-SOCs_STMicroelectronics-STM32F405RGT6_C15742.html</ref>
***** Used by [[ODrive]]
*** For high-precision and complex control tasks with an emphasis on real-time networking and performance, the Infineon XMC4800 is suitable.
*** If budget and simplicity are key considerations, and less intensive processing is required, the NXP LPC1549 offers a good balance.
*** For a balance between performance and ecosystem support, with flexibility in hardware and software, the STM32F405RG is an excellent choice.
*** '''MCUs (without integrated motor control)'''
**** RP2040
* '''Power supply'''
** Assume we will have 48 volt power to the board
* '''Communication'''
** CAN bus
*** Other
**** Texas Instruments ISO1050DUBR <ref>https://www.lcsc.com/product-detail/Isolated-CAN-Transceivers_Texas-Instruments-ISO1050DUBR_C16428.html</ref>
** 1 mbps
* '''Sensing'''
** Temperature
*** Other
**** Maxim Integrated DS18B20 <ref>https://www.lcsc.com/product-detail/Temperature-Sensors_Maxim-Integrated-DS18B20-T-R_C880672.html</ref>
** Absolute (single turn) position
*** Other
**** Infineon TLE5012B <ref> https://www.lcsc.com/product-detail/Position-Sensor_Infineon-Technologies-TLE5012B-E3005_C539928.html</ref>
**** AMS AS5047P <ref> https://www.lcsc.com/product-detail/Position-Sensor_AMS-AS5047P-ATSM_C962063.html </ref>
** Current sensing
*** Other
**** Allegro ACS770 <ref> https://www.lcsc.com/product-detail/Current-Sensors_Allegro-MicroSystems-LLC-ACS770LCB-050U-PFF-T_C696104.html </ref>
* '''Programming port'''
** USB C
=== Design ===
* [https://github.com/kscalelabs/motor-controller See this repository]
=== References ===
<references />
587e53137f3ade83b366ff4a0ce227ef8c0b7f59
909
908
2024-05-06T22:27:27Z
Matt
16
/* Requirements */ Update descriptions for when to use certain MCU's
wikitext
text/x-wiki
This is the K-Scale Motor Controller design document.
=== Requirements ===
* '''Microcontroller'''
** STM
** Other:
*** '''Integrated motor control MCUs'''
**** Infineon XMC4800<ref>https://www.infineon.com/cms/en/product/microcontroller/32-bit-industrial-microcontroller-based-on-arm-cortex-m/32-bit-xmc4000-industrial-microcontroller-arm-cortex-m4/xmc4800/</ref>
**** Infineon XMC4500<ref>https://www.digikey.com/en/products/detail/infineon-technologies/XMC4500F100K1024ACXQSA1/4807912</ref>
**** NXP LPC1549<ref>https://www.digikey.com/en/products/detail/nxp-usa-inc/LPC1549JBD64QL/4696352?utm_adgroup=&utm_source=google&utm_medium=cpc&utm_campaign=PMax%20Shopping_Product_Medium%20ROAS%20Categories&utm_term=&utm_content=&utm_id=go_cmp-20223376311_adg-_ad-__dev-c_ext-_prd-_sig-Cj0KCQjw_-GxBhC1ARIsADGgDjuYk7Hf3F-L_hoQ_4E-fPfjELknu3EAmm9IstEnc92wSAbIMEt0UZAaAsneEALw_wcB&gad_source=1&gclid=Cj0KCQjw_-GxBhC1ARIsADGgDjuYk7Hf3F-L_hoQ_4E-fPfjELknu3EAmm9IstEnc92wSAbIMEt0UZAaAsneEALw_wcB</ref>
**** STM32F405RG<ref>https://www.lcsc.com/product-detail/Microcontroller-Units-MCUs-MPUs-SOCs_STMicroelectronics-STM32F405RGT6_C15742.html</ref>
***** Used by [[ODrive]]
*** For high-precision and complex control tasks with an emphasis on real-time networking and performance, the Infineon XMC4800 is suitable, especially if you need robust communication features like EtherCAT for interconnected device control.
*** If budget and simplicity are key considerations, and less intensive processing is required, the NXP LPC1549 offers a good balance. It provides essential capabilities for motor control without the complexities of advanced networking, making it ideal for straightforward applications.
*** For a balance between performance and ecosystem support, with flexibility in hardware and software, the STM32F405RG is an excellent choice. It offers a powerful ARM Cortex-M4 processor and a rich development ecosystem, suitable for developers looking for extensive community support and flexibility.
*** When high-end communication like EtherCAT isn't a necessity but robust control capabilities are still required, the Infineon XMC4500 is a strong contender. It provides similar processing power and peripheral support as the XMC4800 but without the integrated networking capabilities, which can be advantageous for applications focusing solely on control tasks without the need for complex network communications.
*** '''MCUs (without integrated motor control)'''
**** RP2040
* '''Power supply'''
** Assume we will have 48 volt power to the board
* '''Communication'''
** CAN bus
*** Other
**** Texas Instruments ISO1050DUBR <ref>https://www.lcsc.com/product-detail/Isolated-CAN-Transceivers_Texas-Instruments-ISO1050DUBR_C16428.html</ref>
** 1 mbps
* '''Sensing'''
** Temperature
*** Other
**** Maxim Integrated DS18B20 <ref>https://www.lcsc.com/product-detail/Temperature-Sensors_Maxim-Integrated-DS18B20-T-R_C880672.html</ref>
** Absolute (single turn) position
*** Other
**** Infineon TLE5012B <ref> https://www.lcsc.com/product-detail/Position-Sensor_Infineon-Technologies-TLE5012B-E3005_C539928.html</ref>
**** AMS AS5047P <ref> https://www.lcsc.com/product-detail/Position-Sensor_AMS-AS5047P-ATSM_C962063.html </ref>
** Current sensing
*** Other
**** Allegro ACS770 <ref> https://www.lcsc.com/product-detail/Current-Sensors_Allegro-MicroSystems-LLC-ACS770LCB-050U-PFF-T_C696104.html </ref>
* '''Programming port'''
** USB C
=== Design ===
* [https://github.com/kscalelabs/motor-controller See this repository]
=== References ===
<references />
c92bec144debd491657ede6657df21ce5fa419d4
916
909
2024-05-06T23:54:58Z
Ben
2
wikitext
text/x-wiki
This is the K-Scale Motor Controller design document.
=== Requirements ===
* '''Microcontroller'''
** STM
** Other:
*** '''Integrated motor control MCUs'''
**** Infineon XMC4800<ref>https://www.infineon.com/cms/en/product/microcontroller/32-bit-industrial-microcontroller-based-on-arm-cortex-m/32-bit-xmc4000-industrial-microcontroller-arm-cortex-m4/xmc4800/</ref>
**** Infineon XMC4500<ref>https://www.digikey.com/en/products/detail/infineon-technologies/XMC4500F100K1024ACXQSA1/4807912</ref>
**** NXP LPC1549<ref>https://www.digikey.com/en/products/detail/nxp-usa-inc/LPC1549JBD64QL/4696352?utm_adgroup=&utm_source=google&utm_medium=cpc&utm_campaign=PMax%20Shopping_Product_Medium%20ROAS%20Categories&utm_term=&utm_content=&utm_id=go_cmp-20223376311_adg-_ad-__dev-c_ext-_prd-_sig-Cj0KCQjw_-GxBhC1ARIsADGgDjuYk7Hf3F-L_hoQ_4E-fPfjELknu3EAmm9IstEnc92wSAbIMEt0UZAaAsneEALw_wcB&gad_source=1&gclid=Cj0KCQjw_-GxBhC1ARIsADGgDjuYk7Hf3F-L_hoQ_4E-fPfjELknu3EAmm9IstEnc92wSAbIMEt0UZAaAsneEALw_wcB</ref>
**** STM32F405RG<ref>https://www.lcsc.com/product-detail/Microcontroller-Units-MCUs-MPUs-SOCs_STMicroelectronics-STM32F405RGT6_C15742.html</ref>
***** Used by [[ODrive]]
*** For high-precision and complex control tasks with an emphasis on real-time networking and performance, the Infineon XMC4800 is suitable, especially if you need robust communication features like EtherCAT for interconnected device control.
*** If budget and simplicity are key considerations, and less intensive processing is required, the NXP LPC1549 offers a good balance. It provides essential capabilities for motor control without the complexities of advanced networking, making it ideal for straightforward applications.
*** For a balance between performance and ecosystem support, with flexibility in hardware and software, the STM32F405RG is an excellent choice. It offers a powerful ARM Cortex-M4 processor and a rich development ecosystem, suitable for developers looking for extensive community support and flexibility.
*** When high-end communication like EtherCAT isn't a necessity but robust control capabilities are still required, the Infineon XMC4500 is a strong contender. It provides similar processing power and peripheral support as the XMC4800 but without the integrated networking capabilities, which can be advantageous for applications focusing solely on control tasks without the need for complex network communications.
*** '''MCUs (without integrated motor control)'''
**** RP2040
* '''Power supply'''
** Assume we will have 48 volt power to the board
* '''Communication'''
** CAN bus
*** Other
**** Texas Instruments ISO1050DUBR <ref>https://www.lcsc.com/product-detail/Isolated-CAN-Transceivers_Texas-Instruments-ISO1050DUBR_C16428.html</ref>
** 1 Mbps
* '''Sensing'''
** Temperature
*** Other
**** Maxim Integrated DS18B20 <ref>https://www.lcsc.com/product-detail/Temperature-Sensors_Maxim-Integrated-DS18B20-T-R_C880672.html</ref>
** Absolute (single turn) position
*** Other
**** Infineon TLE5012B <ref> https://www.lcsc.com/product-detail/Position-Sensor_Infineon-Technologies-TLE5012B-E3005_C539928.html</ref>
**** AMS AS5047P <ref> https://www.lcsc.com/product-detail/Position-Sensor_AMS-AS5047P-ATSM_C962063.html </ref>
** Current sensing
*** Other
**** Allegro ACS770 <ref> https://www.lcsc.com/product-detail/Current-Sensors_Allegro-MicroSystems-LLC-ACS770LCB-050U-PFF-T_C696104.html </ref>
* '''Programming port'''
** USB C
=== Design ===
* [https://github.com/kscalelabs/motor-controller See this repository]
=== References ===
<references />
ff61892576dfffa43c0ad22d821c2537c090263d
947
916
2024-05-08T14:53:51Z
Matt
16
/* Design */
wikitext
text/x-wiki
This is the K-Scale Motor Controller design document.
=== Requirements ===
* '''Microcontroller'''
** STM
** Other:
*** '''Integrated motor control MCUs'''
**** Infineon XMC4800<ref>https://www.infineon.com/cms/en/product/microcontroller/32-bit-industrial-microcontroller-based-on-arm-cortex-m/32-bit-xmc4000-industrial-microcontroller-arm-cortex-m4/xmc4800/</ref>
**** Infineon XMC4500<ref>https://www.digikey.com/en/products/detail/infineon-technologies/XMC4500F100K1024ACXQSA1/4807912</ref>
**** NXP LPC1549<ref>https://www.digikey.com/en/products/detail/nxp-usa-inc/LPC1549JBD64QL/4696352?utm_adgroup=&utm_source=google&utm_medium=cpc&utm_campaign=PMax%20Shopping_Product_Medium%20ROAS%20Categories&utm_term=&utm_content=&utm_id=go_cmp-20223376311_adg-_ad-__dev-c_ext-_prd-_sig-Cj0KCQjw_-GxBhC1ARIsADGgDjuYk7Hf3F-L_hoQ_4E-fPfjELknu3EAmm9IstEnc92wSAbIMEt0UZAaAsneEALw_wcB&gad_source=1&gclid=Cj0KCQjw_-GxBhC1ARIsADGgDjuYk7Hf3F-L_hoQ_4E-fPfjELknu3EAmm9IstEnc92wSAbIMEt0UZAaAsneEALw_wcB</ref>
**** STM32F405RG<ref>https://www.lcsc.com/product-detail/Microcontroller-Units-MCUs-MPUs-SOCs_STMicroelectronics-STM32F405RGT6_C15742.html</ref>
***** Used by [[ODrive]]
*** For high-precision and complex control tasks with an emphasis on real-time networking and performance, the Infineon XMC4800 is suitable, especially if you need robust communication features like EtherCAT for interconnected device control.
*** If budget and simplicity are key considerations, and less intensive processing is required, the NXP LPC1549 offers a good balance. It provides essential capabilities for motor control without the complexities of advanced networking, making it ideal for straightforward applications.
*** For a balance between performance and ecosystem support, with flexibility in hardware and software, the STM32F405RG is an excellent choice. It offers a powerful ARM Cortex-M4 processor and a rich development ecosystem, suitable for developers looking for extensive community support and flexibility.
*** When high-end communication like EtherCAT isn't a necessity but robust control capabilities are still required, the Infineon XMC4500 is a strong contender. It provides similar processing power and peripheral support as the XMC4800 but without the integrated networking capabilities, which can be advantageous for applications focusing solely on control tasks without the need for complex network communications.
*** '''MCUs (without integrated motor control)'''
**** RP2040
* '''Power supply'''
** Assume we will have 48 volt power to the board
* '''Communication'''
** CAN bus
*** Other
**** Texas Instruments ISO1050DUBR <ref>https://www.lcsc.com/product-detail/Isolated-CAN-Transceivers_Texas-Instruments-ISO1050DUBR_C16428.html</ref>
** 1 Mbps
* '''Sensing'''
** Temperature
*** Other
**** Maxim Integrated DS18B20 <ref>https://www.lcsc.com/product-detail/Temperature-Sensors_Maxim-Integrated-DS18B20-T-R_C880672.html</ref>
** Absolute (single turn) position
*** Other
**** Infineon TLE5012B <ref> https://www.lcsc.com/product-detail/Position-Sensor_Infineon-Technologies-TLE5012B-E3005_C539928.html</ref>
**** AMS AS5047P <ref> https://www.lcsc.com/product-detail/Position-Sensor_AMS-AS5047P-ATSM_C962063.html </ref>
** Current sensing
*** Other
**** Allegro ACS770 <ref> https://www.lcsc.com/product-detail/Current-Sensors_Allegro-MicroSystems-LLC-ACS770LCB-050U-PFF-T_C696104.html </ref>
* '''Programming port'''
** USB C
=== Design ===
* [https://github.com/kscalelabs/motor-controller See this repository]
*[ Flux link here ]
* Component Pins
** LCD
** CAN Transceiver
** Push Buttons
=== References ===
<references />
97835758ff5fe62d265898bda122261c2c6d0ef9
948
947
2024-05-08T15:02:16Z
Matt
16
/* Design */
wikitext
text/x-wiki
This is the K-Scale Motor Controller design document.
=== Requirements ===
* '''Microcontroller'''
** STM
** Other:
*** '''Integrated motor control MCUs'''
**** Infineon XMC4800<ref>https://www.infineon.com/cms/en/product/microcontroller/32-bit-industrial-microcontroller-based-on-arm-cortex-m/32-bit-xmc4000-industrial-microcontroller-arm-cortex-m4/xmc4800/</ref>
**** Infineon XMC4500<ref>https://www.digikey.com/en/products/detail/infineon-technologies/XMC4500F100K1024ACXQSA1/4807912</ref>
**** NXP LPC1549<ref>https://www.digikey.com/en/products/detail/nxp-usa-inc/LPC1549JBD64QL/4696352?utm_adgroup=&utm_source=google&utm_medium=cpc&utm_campaign=PMax%20Shopping_Product_Medium%20ROAS%20Categories&utm_term=&utm_content=&utm_id=go_cmp-20223376311_adg-_ad-__dev-c_ext-_prd-_sig-Cj0KCQjw_-GxBhC1ARIsADGgDjuYk7Hf3F-L_hoQ_4E-fPfjELknu3EAmm9IstEnc92wSAbIMEt0UZAaAsneEALw_wcB&gad_source=1&gclid=Cj0KCQjw_-GxBhC1ARIsADGgDjuYk7Hf3F-L_hoQ_4E-fPfjELknu3EAmm9IstEnc92wSAbIMEt0UZAaAsneEALw_wcB</ref>
**** STM32F405RG<ref>https://www.lcsc.com/product-detail/Microcontroller-Units-MCUs-MPUs-SOCs_STMicroelectronics-STM32F405RGT6_C15742.html</ref>
***** Used by [[ODrive]]
*** For high-precision and complex control tasks with an emphasis on real-time networking and performance, the Infineon XMC4800 is suitable, especially if you need robust communication features like EtherCAT for interconnected device control.
*** If budget and simplicity are key considerations, and less intensive processing is required, the NXP LPC1549 offers a good balance. It provides essential capabilities for motor control without the complexities of advanced networking, making it ideal for straightforward applications.
*** For a balance between performance and ecosystem support, with flexibility in hardware and software, the STM32F405RG is an excellent choice. It offers a powerful ARM Cortex-M4 processor and a rich development ecosystem, suitable for developers looking for extensive community support and flexibility.
*** When high-end communication like EtherCAT isn't a necessity but robust control capabilities are still required, the Infineon XMC4500 is a strong contender. It provides similar processing power and peripheral support as the XMC4800 but without the integrated networking capabilities, which can be advantageous for applications focusing solely on control tasks without the need for complex network communications.
*** '''MCUs (without integrated motor control)'''
**** RP2040
* '''Power supply'''
** Assume we will have 48 volt power to the board
* '''Communication'''
** CAN bus
*** Other
**** Texas Instruments ISO1050DUBR <ref>https://www.lcsc.com/product-detail/Isolated-CAN-Transceivers_Texas-Instruments-ISO1050DUBR_C16428.html</ref>
** 1 Mbps (see the host-side CAN bring-up sketch at the end of this section)
* '''Sensing'''
** Temperature
*** Other
**** Maxim Integrated DS18B20 <ref>https://www.lcsc.com/product-detail/Temperature-Sensors_Maxim-Integrated-DS18B20-T-R_C880672.html</ref>
** Absolute (single turn) position
*** Other
**** Infineon TLE5012B <ref> https://www.lcsc.com/product-detail/Position-Sensor_Infineon-Technologies-TLE5012B-E3005_C539928.html</ref>
**** AMS AS5047P <ref> https://www.lcsc.com/product-detail/Position-Sensor_AMS-AS5047P-ATSM_C962063.html </ref>
** Current sensing
*** Other
**** Allegro ACS770 <ref> https://www.lcsc.com/product-detail/Current-Sensors_Allegro-MicroSystems-LLC-ACS770LCB-050U-PFF-T_C696104.html </ref>
* '''Programming port'''
** USB C (see the firmware-loading sketch under Design below)
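As a rough host-side sketch of exercising the CAN requirement (not part of the board design), the following assumes a Linux development machine with SocketCAN, the can-utils package, and a USB-CAN adapter that appears as <code>can0</code>; the interface name and CAN ID are placeholders:
<syntaxhighlight lang="bash">
# Bring the CAN interface up at the required 1 Mbps bit rate
sudo ip link set can0 up type can bitrate 1000000

# Send a test frame (hypothetical ID 0x123 with four data bytes), then watch bus traffic
cansend can0 123#DEADBEEF
candump can0
</syntaxhighlight>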
=== Design ===
* [https://github.com/kscalelabs/motor-controller See this repository]
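For the USB-C programming port, one possible firmware-loading workflow is sketched below; it assumes the MCU is an STM32 (e.g. the STM32F405RG) booted into its built-in USB DFU bootloader, and <code>firmware.bin</code> is a placeholder name:
<syntaxhighlight lang="bash">
# List DFU-capable devices (the MCU must be in its ROM bootloader, e.g. via the BOOT0 pin)
dfu-util --list

# Write firmware.bin to internal flash at 0x08000000, then leave DFU mode and run it
dfu-util -a 0 -s 0x08000000:leave -D firmware.bin
</syntaxhighlight>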
=== References ===
<references />
ff61892576dfffa43c0ad22d821c2537c090263d
ODrive
0
170
902
874
2024-05-06T21:44:33Z
Ben
2
wikitext
text/x-wiki
'''ODrive''' is a precision motor controller that is highly applicable in robotics domains, CNC machines, and other areas demanding high-performance motor control. It provides accurate control over hobby-grade and industrial electric motors.
{{infobox actuator
| name = ODrive Motor Controller
| manufacturer = ODrive Robotics
| cost =
| purchase_link = https://odriverobotics.com/
| nominal_torque =
| peak_torque =
| weight =
| dimensions =
| gear_ratio =
| voltage =
| cad_link =
| interface =
| gear_type =
}}
== Product Description ==
ODrive is used for precise control of electric motors and is well suited to both hobby-grade and industrial motors. The controller is designed for applications demanding high performance, and its flexibility allows it to be used in a variety of applications including, but not limited to, robotics and CNC machines.
== Industrial Application ==
The ODrive motor controller finds its application in numerous sectors, particularly in robotics and CNC machines. This is mainly due to its flexibility, which facilitates control over a variety of motors ranging from hobby-grade to industrial ones. Furthermore, the precision and high performance offered by the controller make it a preferred choice for professionals.
== Technical Specifications ==
While the exact technical specifications of the ODrive motor controller can vary depending on the model, the controller is generally characterized by the following features:
* High performance motor control: Ensures precision and accuracy.
* Flexible interface: Facilitates easy integration with a variety of motors.
* Compatible with various types of electric motors: Ensures wide application.
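As a minimal host-side sketch of working with a board, ODrive distributes a Python-based <code>odrivetool</code> utility on PyPI that opens an interactive shell over USB (exact package and firmware versions vary by board revision):
<syntaxhighlight lang="bash">
# Install the ODrive host tools from PyPI
pip install --upgrade odrive

# Launch the interactive shell; connected boards show up as odrv0, odrv1, ...
odrivetool
</syntaxhighlight>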
== Integrated Circuits ==
* '''MCU''': STM32F405RG<ref>https://www.st.com/en/microcontrollers-microprocessors/stm32f405rg.html</ref>
== References ==
<references />
[[Category:Actuators]]
aa149883cde57b753c3591906cd15b7dc9f3e64e
File:RMD X8 PCB bottom view.jpg
6
210
910
2024-05-06T22:29:53Z
Ben
2
wikitext
text/x-wiki
RMD X8 PCB bottom view
5a8a86c86a7c660035ccdf6c7d0a09bdea993583
File:RMD X6 exploded view.jpg
6
211
911
2024-05-06T22:30:34Z
Ben
2
wikitext
text/x-wiki
RMD X6 exploded view
38269032403fbebefab2d559789b39a394a3720f
File:RMD X-Series CAN Bus.png
6
212
914
2024-05-06T22:32:51Z
Ben
2
wikitext
text/x-wiki
RMD X-Series CAN Bus
6d29aadd6ac32c09f614c89058d4601f8c60ef80
LASER Robotics
0
213
923
2024-05-07T02:37:43Z
Modeless
7
Created page with "LASER Robotics seems to be founded by people from USC's Dynamic Robotics and Control Laboratory. They have a humanoid called HECTOR v2.<ref>https://laser-robotics.com/hector-v..."
wikitext
text/x-wiki
LASER Robotics seems to be founded by people from USC's Dynamic Robotics and Control Laboratory. They have a humanoid called HECTOR v2.<ref>https://laser-robotics.com/hector-v2/</ref>
2626f36cafb6480c2f97b92c3a28b08f1a173d84
924
923
2024-05-07T02:41:03Z
Modeless
7
wikitext
text/x-wiki
LASER Robotics seems to be founded by people from USC's Dynamic Robotics and Control Laboratory. They have a small 0.85 m tall humanoid called [[HECTOR V2]] with open-source control and simulation software aimed at research.<ref>https://laser-robotics.com/hector-v2/</ref>
64b5062d242f111e356277e0464145b1a7f0e52d
933
924
2024-05-07T05:12:18Z
Ben
2
wikitext
text/x-wiki
LASER Robotics seems to be founded by people from USC's Dynamic Robotics and Control Laboratory. They have a small 0.85 m tall humanoid called [[HECTOR V2]] with open-source control and simulation software aimed at research.<ref>https://laser-robotics.com/hector-v2/</ref>
=== References ===
<references/>
e803bac2792a515757969bc5d70c6b1735aaa900
Berkeley Blue
0
214
928
2024-05-07T04:49:43Z
185.169.0.83
0
Created page with "A low-cost backdrivable 7-DOF robotic manipulator.<ref>https://berkeleyopenarms.github.io/</ref> === References === <references/>"
wikitext
text/x-wiki
A low-cost backdrivable 7-DOF robotic manipulator.<ref>https://berkeleyopenarms.github.io/</ref>
=== References ===
<references/>
2558b9f89a4939aca41dffb330ee4003e36b38ab
K-Scale Cluster
0
16
930
865
2024-05-07T05:08:49Z
Ben
2
wikitext
text/x-wiki
The K-Scale Labs clusters are shared clusters for robotics research. This page contains notes on how to access them.
== Onboarding ==
To get onboarded, you should send us the SSH public key that you want to use and, optionally, your preferred username.
After being onboarded, you should receive the following information:
* Your user ID (for this example, we'll use <code>stompy</code>)
* The jumphost ID (for this example, we'll use <code>127.0.0.1</code>)
* The cluster ID (for this example, we'll use <code>127.0.0.2</code>)
To connect, you should be able to use the following command:
<syntaxhighlight lang="bash">
ssh -o ProxyCommand="ssh -i ~/.ssh/id_rsa -W %h:%p stompy@127.0.0.1" stompy@127.0.0.2 -i ~/.ssh/id_rsa
</syntaxhighlight>
Note that <code>~/.ssh/id_rsa</code> should point to your private key file.
Alternatively, you can add the following to your SSH config file, which should allow you to connect directly.
Use your favorite editor to open the SSH config file (normally located at <code>~/.ssh/config</code> on Ubuntu) and paste the following:
<syntaxhighlight lang="text">
Host jumphost
User stompy
Hostname 127.0.0.1
IdentityFile ~/.ssh/id_rsa
Host cluster
User stompy
Hostname 127.0.0.2
ProxyJump jumphost
IdentityFile ~/.ssh/id_rsa
</syntaxhighlight>
After setting this up, you can use the command <code>ssh cluster</code> to directly connect.
You can also connect via VS Code. A tutorial on using <code>ssh</code> in VS Code is available [https://code.visualstudio.com/docs/remote/ssh-tutorial here].
Please inform us if you have any issues!
=== Notes ===
* You may need to restart <code>ssh</code> to get it working.
* You may be sharing your part of the cluster with other users. If so, it is a good idea to avoid using all the GPUs. If you're training models in PyTorch, you can do this by setting the <code>CUDA_VISIBLE_DEVICES</code> environment variable (see the example after this list).
* You should avoid storing data files and model checkpoints to your root directory. Instead, use the <code>/ephemeral</code> directory. Your home directory should come with a symlink to a subdirectory which you have write access to.
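For example (a sketch; the GPU indices and script name are placeholders), a PyTorch training run can be restricted to specific GPUs like this:
<syntaxhighlight lang="bash">
# Check which GPUs are free before picking indices
nvidia-smi

# Only GPUs 0 and 1 will be visible to this process; other users can take the rest
CUDA_VISIBLE_DEVICES=0,1 python train.py
</syntaxhighlight>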
=== Cluster 1 ===
=== Cluster 2 ===
The cluster has 8 available nodes (each with 8 GPUs):
<syntaxhighlight lang="text">
compute-permanent-node-68
compute-permanent-node-285
compute-permanent-node-493
compute-permanent-node-625
compute-permanent-node-626
compute-permanent-node-749
compute-permanent-node-801
compute-permanent-node-580
</syntaxhighlight>
When you SSH in, you first land on the bastion node <code>pure-caribou-bastion</code>, from which you can log in to any of the other nodes to test your code.
== Reserving a GPU ==
Here is a script you can use for getting an interactive node through Slurm.
<syntaxhighlight lang="bash">
# Attach to an existing interactive GPU job if one is already running; otherwise start a new one.
gpunode () {
    # Look for a running job of ours named "gpunode"
    local job_id=$(squeue -u $USER -h -t R -o %i -n gpunode)
    if [[ -n $job_id ]]
    then
        echo "Attaching to job ID $job_id"
        srun --jobid=$job_id --partition=$SLURM_GPUNODE_PARTITION --gpus=$SLURM_GPUNODE_NUM_GPUS --cpus-per-gpu=$SLURM_GPUNODE_CPUS_PER_GPU --pty $SLURM_XPUNODE_SHELL
        return 0
    fi
    # No existing job: request a new interactive allocation
    echo "Creating new job"
    srun --partition=$SLURM_GPUNODE_PARTITION --gpus=$SLURM_GPUNODE_NUM_GPUS --cpus-per-gpu=$SLURM_GPUNODE_CPUS_PER_GPU --interactive --job-name=gpunode --pty $SLURM_XPUNODE_SHELL
}
</syntaxhighlight>
Example env vars:
<syntaxhighlight lang="bash">
export SLURM_GPUNODE_PARTITION='compute'
export SLURM_GPUNODE_NUM_GPUS=1
export SLURM_GPUNODE_CPUS_PER_GPU=4
export SLURM_XPUNODE_SHELL='/bin/bash'
</syntaxhighlight>
Integrate the example script into your shell, then run <code>gpunode</code>.
You can see partition options by running <code>sinfo</code>.
You might get an error like <code>groups: cannot find name for group ID 1506</code>; things should still run fine. You can confirm the GPUs are visible with <code>nvidia-smi</code>.
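One way to integrate the <code>gpunode</code> helper (a sketch; the file path and values are examples) is to keep the function in its own file and load it, along with the environment variables, from your shell startup config:
<syntaxhighlight lang="bash">
# Save the gpunode function above as ~/gpunode.sh, then load it from ~/.bashrc
cat >> ~/.bashrc <<'EOF'
export SLURM_GPUNODE_PARTITION='compute'
export SLURM_GPUNODE_NUM_GPUS=1
export SLURM_GPUNODE_CPUS_PER_GPU=4
export SLURM_XPUNODE_SHELL='/bin/bash'
source ~/gpunode.sh
EOF

# Reload the config and request an interactive GPU shell
source ~/.bashrc
gpunode
</syntaxhighlight>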
=== Useful Commands ===
Set a node state back to normal:
<syntaxhighlight lang="bash">
sudo scontrol update nodename='nodename' state=resume
</syntaxhighlight>
[[Category:K-Scale]]
1fd3b99c33a47197d00f66fa784a036813390556
HECTOR V2
0
215
931
2024-05-07T05:11:31Z
Ben
2
Created page with "Open-source humanoid robot from USC.<ref>https://github.com/DRCL-USC/Hector_Simulation</ref> === References === <references/>"
wikitext
text/x-wiki
Open-source humanoid robot from USC.<ref>https://github.com/DRCL-USC/Hector_Simulation</ref>
=== References ===
<references/>
1a3dbfa9a8cedc111c3a882f8ee1009b58969c83
932
931
2024-05-07T05:11:48Z
Ben
2
wikitext
text/x-wiki
Open-source humanoid robot from USC.<ref>https://github.com/DRCL-USC/Hector_Simulation</ref><ref>https://laser-robotics.com/hector-v2/</ref>
=== References ===
<references/>
7977068d4016c1acee3e50cb9050c71f3410584d
Getting Started with Humanoid Robots
0
193
940
882
2024-05-07T19:57:01Z
Vrtnis
21
/* Experimenting with Your Humanoid Robot */
wikitext
text/x-wiki
This is a build guide for getting started experimenting with your own humanoid robot.
This is incomplete and a work in progress; you can help by expanding it!
== Building Your Humanoid Robot ==
In humanoid robotics, choosing the right components, such as actuators and gearboxes, is crucial. Folks can use planetary and cycloidal gear actuators for their precision and strength, along with Series Elastic and Quasi-Direct Drive actuators for smoother, more natural movements. Advanced designs like the [https://humanoids.wiki/w/MIT_Cheetah MIT Cheetah] actuator push the boundaries with fast, agile movements. Projects like the SPIN initiative are also key, as they make high-quality actuator technology more accessible, helping the field evolve and improve.
== Actuators and Gearboxes ==
=== Actuator Types and Design Inspirations ===
==== Planetary and Cycloidal Gear Actuators ====
These actuators remain popular in the robotics community due to their high torque output and compact form factors. Planetary gears are favored for their efficiency and ability to handle high power densities, crucial for humanoid robotics. Cycloidal gears offer superior load-bearing capabilities and minimal backlash, ideal for precise motion control.
MyActuator (just one option) offers a variety of planetary actuators. These actuators, while still relatively pricey, offer robust performance and are integral to the efficient functioning of the builds. Some models are:
* RMD X4: A lightweight and compact actuator that provides precise control and high efficiency.
* RMD X6: Offers a good balance of torque and speed, suitable for medium-sized applications.
* RMD X8: Features a more powerful motor and higher torque capacity, making it ideal for more demanding tasks.
* RMD X10: The most powerful actuator used, designed for high torque applications with excellent control features.
==== Series Elastic and Quasi-Direct Drive Actuators ====
Series Elastic Actuators (SEAs) are used in applications requiring safe and compliant human-robot interaction. They incorporate elastic elements, allowing for energy absorption and safer interactions. Quasi-Direct Drive Actuators provide a balance between the control fidelity of direct drives and the mechanical simplicity of geared systems, promoting natural and responsive movements.
Some things to consider:
The springs in SEAs are where the magic happens. Choosing the right stiffness is a balancing act between getting precise torque control and avoiding sluggish responses. Since the spring is constantly flexing, you need sensors tuned to give accurate torque measurements. Recalibrate them regularly, and you'll keep those movements smooth and predictable. You want finely-tuned control loops to make SEAs shine. A high-frequency loop can make your robot more agile in handling external forces. PID controllers are a solid starting point, or you can try out some advanced strategies.
Friction can really impact your torque control, especially in gearboxes and linkages. Using low-friction components and proper lubrication will help keep everything moving smoothly. Make sure your spring is positioned directly between the actuator and the joint. If not, your robot won't get the full benefit of force sensing, and that precision will be lost. If your robot is doing a lot of high-impact activities, the springs can wear out. Keep an eye on them to avoid breakdowns when you least expect them. SEAs thrive on real-time feedback. Ensure your software can handle data quickly, maybe using a real-time operating system or optimized signal processing.
==== MIT Cheetah Actuator ====
The MIT Cheetah actuator design is a notable example that several community members are considering emulating. Its design optimizes for rapid, dynamic movements and could potentially set a standard for agile robotic locomotion. It's designed to pack a lot of power into a lightweight, compact system. It offers excellent torque and control without being bulky, making it perfect for mobile robots that need to be quick on their feet. Also, it's energy-efficient and provides a high torque-to-weight ratio, so robots can move fast and precisely, which is essential for those tricky, agile movements.
One of the coolest things about this actuator is how it manages to minimize backlash, giving you smooth, accurate control over the robot's motion. Its integrated design also means the motor and controller work together seamlessly, which keeps the system streamlined. Plus, the advanced control algorithms make it easy for the actuator to handle dynamic motions, whether it's fast acceleration or sharp turns. If you're building a robot that needs to move like a sprinter while staying super nimble, the MIT Cheetah actuator is an awesome choice.
Here is the [https://fab.cba.mit.edu/classes/865.18/motion/papers/mit-cheetah-actuator.pdf MIT research paper] if you are interested in a deeper dive.
=== Open-Source Development and Collaboration ===
==== SPIN: A Revolutionary Servo Project ====
The [https://github.com/atopile/spin-servo-drive SPIN Project] by Atopile is developing an open-source hardware project aimed at making it easier and more cost-effective to use BLDC servo motors. This project is particularly notable for its potential to democratize high-quality actuator technology, making it accessible for a broader range of developers and hobbyists.
=== Community Insights and Future Directions ===
==== Comprehensive Actuator Comparisons ====
The humanoid robotics community actively discusses the need for a universal platform to compare and contrast the cost and performance of commercially available actuators. This could involve developing a comprehensive database or chart detailing each actuator's cost per Newton-meter, control schemes, and RPM, providing a valuable resource for both newcomers and experienced developers.
Here is a [https://jakeread.pages.cba.mit.edu/actuators/ scatter plot] of actuators hosted at MIT
==== Custom Actuator Developments ====
[https://irisdynamics.com/products/orca-series Iris Dynamics electric linear actuators] suggest they can match the capabilities of human muscles, making them particularly interesting for humanoid applications.
== Assembly Tips ==
===== Community Forums =====
Leverage discussions from platforms like RobotForum to avoid common pitfalls. Whether it's selecting the right planetary gearbox or figuring out the optimal motor for each joint, community insights can be invaluable.
=== Programming and Control ===
==== ROS (Robot Operating System) ====
Start with ROS for an extensive suite of tools for programming and control, suitable for managing complex robotic functions. ROS serves as a valuable abstraction for comprehending the diverse components within a robotics system. However, its size and complexity can make it challenging for some users to modify. The learning curve can be steep, and because ROS leans heavily on third-party packages, tracking down issues can require additional effort and expertise.
==== Custom Software Solutions ====
Explore custom algorithms for adaptive control or reactive behaviors. Integrate advanced sensor feedback loops for real-time adjustments.
== Experimenting with Your Humanoid Robot ==
=== Testing and Iteration ===
==== Virtual Testing Before Physical Implementation in Humanoid Robotics ====
NVIDIA's Isaac Sim and Isaac Gym, alongside other simulators, form a crucial foundation for designing and testing humanoid robots virtually. Insights and suggestions from experts working with these tools are captured below.
===== Isaac-Based Simulators and Frameworks =====
====== Isaac Sim ======
* IDE Experience: Provides a comprehensive, if complex, simulation environment.
* PhysX Engine: Utilizes the PhysX engine to handle both contact and joint constraints, though Isaac Sim currently does not fully expose closed-loop constraint capabilities.
* Joint Constraints: Supports maximal coordinate systems, which include joint constraints that are common in articulated robots.
* Virtual Sensors: Allows the simulation of perception with virtual cameras and LiDARs, providing policy training inputs rendered with NVIDIA RTX.
====== Isaac Gym ======
* Reinforcement Learning Training: Enables parallel environments for fast policy training.
* PHC Approach: Integrates AMP for real-time pose control, making it easier to teach new skills.
* Gait Optimization Issues: While 17-DOF walking tasks work well, gait reward optimization needs refinement for more complex tasks.
* Closed-Loop Articulation: Belt-driven mechanisms provide a viable alternative for certain closed-loop designs.
====== Orbit Framework ======
* Unified Training Framework: Integrates Isaac Sim and Isaac Gym for modular and consistent policy validation.
* OmniIsaacGymEnvs: Offers predefined tasks like walking and standing.
====== Omniverse Isaac Gym ======
* Shift in Development: NVIDIA is consolidating Isaac Gym into Isaac Sim through Omniverse, providing the best of both worlds.
* Challenges: Demands powerful NVIDIA GPUs, potentially limiting some development workflows.
===== External Tools and Comparative Platforms =====
====== Legged Gym ======
A repository showcasing the state-of-the-art in legged robot training.
====== MuJoCo (MJX)======
Offers a lightweight, open-source alternative that supports maximal coordinate constraints and is generally easier to work with. The MuJoCo_MPC repository, created by Google DeepMind, is a toolset that combines Model Predictive Control (MPC) with the MuJoCo physics engine to create real-time behavior synthesis. With the advanced MJX extension, which uses GPU acceleration, it can simulate multiple environments in parallel. One approach is to try to replicate the techniques detailed in the AMP (Adversarial Motion Priors) paper to achieve agile humanoid behavior, for example by implementing a humanoid get-up sequence that matches what was described in the AMP research.
There’s been collaboration between different projects, like Stompy, to get humanoid simulations up and running. You could try converting Gymnasium to handle the URDF (Universal Robot Description Format) file format. Although converting to MJCF (MuJoCo's XML-based format) may present some challenges, we can still get it to work and refine the motor and actuator setup.
Although MuJoCo can be slower in single-environment simulations, the MJX extension and its parallel processing potential make it a solid competitor. Compared to environments like NVIDIA's Isaac Gym, MuJoCo might stand out for its extensibility and rapid development. One goal could be to try to recreate the walking, running, and getting-up behaviors described in the AMP paper and use them as a foundation for training robust humanoid movements in simulation.
====== VSim ======
Claims to be 10x faster than other simulators.
====== ManiSkill/Sapien ======
Provides tactile simulation and visual-based policy training that is up to 100x faster than Isaac Sim.
===== Best Practices for Virtual Testing =====
* Incremental Complexity: Start simple and build up to more complex environments and tasks.
* Cross-Simulator Validation: Validate robot models across simulators (e.g., Isaac and MuJoCo) to ensure robustness.
* Incorporate Real-World Fidelity: Include sensor noise and imperfections for better policy generalization.
* Optimize Resources:
** Use Azure's A100 GPUs for Isaac training.
** Capture real-world data to refine virtual training.
By understanding the nuances and strengths of each simulator, developers can refine their humanoid robots effectively. Using Isaac Sim, Isaac Gym, and complementary tools, a robust simulation approach ensures smooth virtual-to-physical transferability while reducing development time and costs.
More resources are available at [https://humanoids.wiki/w/Learning_algorithms]
== Real-World Testing ==
Gradually transition to physical testing, beginning with simple tasks and moving to more complex interactions.
=== Data Collection and Analysis ===
==== Camera Systems ====
Consider integrating advanced camera systems like those from e-con Systems or Arducam for visual feedback and navigation. Weigh camera choices against factors like latency, resolution, and ease of integration with your main control system.
== Advanced Customization and Community Engagement ==
=== Open Source Projects ===
Contribute to an open-source project. For instance, platforms like GitHub host numerous projects where you can collaborate with others, such as [https://github.com/kscalelabs K-Scale Labs]. Check out [https://humanoids.wiki/w/Stompy Stompy]!
=== Modular Design ===
Engage in modular robot design to easily swap components or aesthetics. This approach allows for extensive customization and upgrades over time.
== Safety and Continuous Learning ==
=== Safety Protocols ===
Always implement robust safety measures when testing and demonstrating your robot.
d2813031631c0d09b1c6e27bfddb1632e70152fa
941
940
2024-05-07T19:58:15Z
Vrtnis
21
/* Best Practices for Virtual Testing */
wikitext
text/x-wiki
This is a build guide for getting started experimenting with your own humanoid robot.
This is incomplete and a work in progress; you can help by expanding it!
== Building Your Humanoid Robot ==
In humanoid robotics, choosing the right components, such as actuators and gearboxes, is crucial. Folks can use planetary and cycloidal gear actuators for their precision and strength, along with Series Elastic and Quasi-Direct Drive actuators for smoother, more natural movements. Advanced designs like the [https://humanoids.wiki/w/MIT_Cheetah MIT Cheetah] actuator push the boundaries with fast, agile movements. Projects like the SPIN initiative are also key, as they make high-quality actuator technology more accessible, helping the field evolve and improve.
== Actuators and Gearboxes ==
=== Actuator Types and Design Inspirations ===
==== Planetary and Cycloidal Gear Actuators ====
These actuators remain popular in the robotics community due to their high torque output and compact form factors. Planetary gears are favored for their efficiency and ability to handle high power densities, crucial for humanoid robotics. Cycloidal gears offer superior load-bearing capabilities and minimal backlash, ideal for precise motion control.
MyActuator (just one option) offers a variety of planetary actuators. These actuators, while still relatively pricey, offer robust performance and are integral to the efficient functioning of the builds. Some models are:
* RMD X4: A lightweight and compact actuator that provides precise control and high efficiency.
* RMD X6: Offers a good balance of torque and speed, suitable for medium-sized applications.
* RMD X8: Features a more powerful motor and higher torque capacity, making it ideal for more demanding tasks.
* RMD X10: The most powerful actuator used, designed for high torque applications with excellent control features.
==== Series Elastic and Quasi-Direct Drive Actuators ====
Series Elastic Actuators (SEAs) are used in applications requiring safe and compliant human-robot interaction. They incorporate elastic elements, allowing for energy absorption and safer interactions. Quasi-Direct Drive Actuators provide a balance between the control fidelity of direct drives and the mechanical simplicity of geared systems, promoting natural and responsive movements.
Some things to consider:
The springs in SEAs are where the magic happens. Choosing the right stiffness is a balancing act between getting precise torque control and avoiding sluggish responses. Since the spring is constantly flexing, you need sensors tuned to give accurate torque measurements. Recalibrate them regularly, and you'll keep those movements smooth and predictable. You want finely-tuned control loops to make SEAs shine. A high-frequency loop can make your robot more agile in handling external forces. PID controllers are a solid starting point, or you can try out some advanced strategies.
Friction can really impact your torque control, especially in gearboxes and linkages. Using low-friction components and proper lubrication will help keep everything moving smoothly. Make sure your spring is positioned directly between the actuator and the joint. If not, your robot won't get the full benefit of force sensing, and that precision will be lost. If your robot is doing a lot of high-impact activities, the springs can wear out. Keep an eye on them to avoid breakdowns when you least expect them. SEAs thrive on real-time feedback. Ensure your software can handle data quickly, maybe using a real-time operating system or optimized signal processing.
==== MIT Cheetah Actuator ====
The MIT Cheetah actuator design is a notable example that several community members are considering emulating. Its design optimizes for rapid, dynamic movements and could potentially set a standard for agile robotic locomotion. It's designed to pack a lot of power into a lightweight, compact system. It offers excellent torque and control without being bulky, making it perfect for mobile robots that need to be quick on their feet. Also, it's energy-efficient and provides a high torque-to-weight ratio, so robots can move fast and precisely, which is essential for those tricky, agile movements.
One of the coolest things about this actuator is how it manages to minimize backlash, giving you smooth, accurate control over the robot's motion. Its integrated design also means the motor and controller work together seamlessly, which keeps the system streamlined. Plus, the advanced control algorithms make it easy for the actuator to handle dynamic motions, whether it's fast acceleration or sharp turns. If you're building a robot that needs to move like a sprinter while staying super nimble, the MIT Cheetah actuator is an awesome choice.
Here is the [https://fab.cba.mit.edu/classes/865.18/motion/papers/mit-cheetah-actuator.pdf MIT research paper] if you are interested in a deeper dive.
=== Open-Source Development and Collaboration ===
==== SPIN: A Revolutionary Servo Project ====
The [https://github.com/atopile/spin-servo-drive SPIN Project] by Atopile is developing an open-source hardware project aimed at making it easier and more cost-effective to use BLDC servo motors. This project is particularly notable for its potential to democratize high-quality actuator technology, making it accessible for a broader range of developers and hobbyists.
=== Community Insights and Future Directions ===
==== Comprehensive Actuator Comparisons ====
The humanoid robotics community actively discusses the need for a universal platform to compare and contrast the cost and performance of commercially available actuators. This could involve developing a comprehensive database or chart detailing each actuator's cost per Newton-meter, control schemes, and RPM, providing a valuable resource for both newcomers and experienced developers.
Here is a [https://jakeread.pages.cba.mit.edu/actuators/ scatter plot] of actuators hosted at MIT
==== Custom Actuator Developments ====
[https://irisdynamics.com/products/orca-series Iris Dynamics electric linear actuators] suggest they can match the capabilities of human muscles, making them particularly interesting for humanoid applications.
== Assembly Tips ==
===== Community Forums =====
Leverage discussions from platforms like RobotForum to avoid common pitfalls. Whether it's selecting the right planetary gearbox or figuring out the optimal motor for each joint, community insights can be invaluable.
=== Programming and Control ===
==== ROS (Robot Operating System) ====
Start with ROS for an extensive suite of tools for programming and control, suitable for managing complex robotic functions. ROS serves as a valuable abstraction for comprehending the diverse components within a robotics system. However, its size and complexity can make it challenging for some users to modify. The learning curve can be steep, and because ROS leans heavily on third-party packages, tracking down issues can require additional effort and expertise.
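As a quick sanity check (a sketch assuming a ROS 2 installation under <code>/opt/ros/humble</code>; adjust the distro name for your setup), you can source the environment and run the stock demo nodes to confirm the tooling works before wiring up your own robot:
<syntaxhighlight lang="bash">
# Make the ROS 2 tools and packages available in this shell
source /opt/ros/humble/setup.bash

# Terminal 1: publish demo messages
ros2 run demo_nodes_cpp talker

# Terminal 2: subscribe to them and inspect the running graph
ros2 run demo_nodes_py listener
ros2 topic list
</syntaxhighlight>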
==== Custom Software Solutions ====
Explore custom algorithms for adaptive control or reactive behaviors. Integrate advanced sensor feedback loops for real-time adjustments.
== Experimenting with Your Humanoid Robot ==
=== Testing and Iteration ===
==== Virtual Testing Before Physical Implementation in Humanoid Robotics ====
NVIDIA's Isaac Sim and Isaac Gym, alongside other simulators, form a crucial foundation for designing and testing humanoid robots virtually. Insights and suggestions from experts working with these tools are captured below.
===== Isaac-Based Simulators and Frameworks =====
====== Isaac Sim ======
* IDE Experience: Provides a comprehensive, if complex, simulation environment.
* PhysX Engine: Utilizes the PhysX engine to handle both contact and joint constraints, though Isaac Sim currently does not fully expose closed-loop constraint capabilities.
* Joint Constraints: Supports maximal coordinate systems, which include joint constraints that are common in articulated robots.
* Virtual Sensors: Allows the simulation of perception with virtual cameras and LiDARs, providing policy training inputs rendered with NVIDIA RTX.
====== Isaac Gym ======
* Reinforcement Learning Training: Enables parallel environments for fast policy training.
* PHC Approach: Integrates AMP for real-time pose control, making it easier to teach new skills.
* Gait Optimization Issues: While 17-DOF walking tasks work well, gait reward optimization needs refinement for more complex tasks.
* Closed-Loop Articulation: Belt-driven mechanisms provide a viable alternative for certain closed-loop designs.
====== Orbit Framework ======
* Unified Training Framework: Integrates Isaac Sim and Isaac Gym for modular and consistent policy validation.
* OmniIsaacGymEnvs: Offers predefined tasks like walking and standing.
====== Omniverse Isaac Gym ======
* Shift in Development: NVIDIA is consolidating Isaac Gym into Isaac Sim through Omniverse, providing the best of both worlds.
* Challenges: Demands powerful NVIDIA GPUs, potentially limiting some development workflows.
===== External Tools and Comparative Platforms =====
====== Legged Gym ======
A repository showcasing the state-of-the-art in legged robot training.
====== MuJoCo (MJX)======
Offers a lightweight, open-source alternative that supports maximal coordinate constraints and is generally easier to work with. The MuJoCo_MPC repository, created by Google DeepMind, is a toolset that combines Model Predictive Control (MPC) with the MuJoCo physics engine to create real-time behavior synthesis. With the advanced MJX extension, which uses GPU acceleration, it can simulate multiple environments in parallel. One approach is to try to replicate the techniques detailed in the AMP (Adversarial Motion Priors) paper to achieve agile humanoid behavior, for example by implementing a humanoid get-up sequence that matches what was described in the AMP research.
There’s been collaboration between different projects, like Stompy, to get humanoid simulations up and running. You could try converting Gymnasium to handle the URDF (Universal Robot Description Format) file format. Although converting to MJCF (MuJoCo's XML-based format) may present some challenges, we can still get it to work and refine the motor and actuator setup.
Although MuJoCo can be slower in single-environment simulations, the MJX extension and its parallel processing potential make it a solid competitor. Compared to environments like NVIDIA's Isaac Gym, MuJoCo might stand out for its extensibility and rapid development. One goal could be to try to recreate the walking, running, and getting-up behaviors described in the AMP paper and use them as a foundation for training robust humanoid movements in simulation.
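To experiment locally (a sketch; versions and model files are up to you), the MuJoCo Python package on PyPI is enough to install the engine and open the interactive viewer before attempting MJX-scale parallel training:
<syntaxhighlight lang="bash">
# Install the open-source MuJoCo bindings
pip install mujoco

# Recent releases ship a standalone interactive viewer; a model file (e.g. MJCF XML)
# can be dragged and dropped into the window to inspect it
python -m mujoco.viewer
</syntaxhighlight>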
====== VSim ======
Claims to be 10x faster than other simulators.
====== ManiSkill/Sapien ======
Provides tactile simulation and visual-based policy training that is up to 100x faster than Isaac Sim.
===== Best Practices for Virtual Testing =====
* Incremental Complexity: Start simple and build up to more complex environments and tasks.
* Cross-Simulator Validation: Validate robot models across simulators (e.g., Isaac and MuJoCo) to ensure robustness.
* Incorporate Real-World Fidelity: Include sensor noise and imperfections for better policy generalization (see the sketch after this list).
* Optimize Resources:
** Use Azure's A100 GPUs for Isaac training.
** Capture real-world data to refine virtual training.
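As a concrete example of the real-world-fidelity point in the list above, a simulator-agnostic sketch of injecting observation noise and occasional sensor dropouts is shown below; the noise magnitudes are arbitrary placeholders and should be tuned per sensor.
<syntaxhighlight lang="python">
# Generic observation-noise sketch for sim-to-real robustness
# (noise scales below are arbitrary placeholders; tune them per sensor).
import numpy as np

rng = np.random.default_rng(0)

def noisy_observation(obs, noise_std=0.01, dropout_prob=0.001):
    """Return obs with additive Gaussian noise and occasional dropped readings."""
    out = obs + rng.normal(0.0, noise_std, size=obs.shape)
    # Occasionally zero a reading to mimic a dropped sensor packet.
    dropped = rng.random(obs.shape) < dropout_prob
    out[dropped] = 0.0
    return out

clean = np.array([0.1, -0.2, 9.81])   # e.g. a simulated accelerometer sample
print(noisy_observation(clean))
</syntaxhighlight>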
By understanding the nuances and strengths of each simulator, developers can refine their humanoid robots effectively. Combining Isaac Sim, Isaac Gym, and complementary tools into a robust simulation approach improves virtual-to-physical transferability while reducing development time and costs.
More resources are available at [https://humanoids.wiki/w/Learning_algorithms Learning Algorithms]
== Real-World Testing ==
Gradually transition to physical testing, beginning with simple tasks and moving to more complex interactions.
=== Data Collection and Analysis ===
==== Camera Systems ====
Consider integrating advanced camera systems like those from e-con Systems or Arducam for visual feedback and navigation. Weigh camera choices against factors like latency, resolution, and ease of integration with your main control system; a rough frame-rate check sketch is shown below.
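As a starting point for such comparisons, the sketch below measures achieved frame rate and host-side capture time with OpenCV. It assumes a camera at V4L2 index 0 and is only a rough benchmark; it does not capture full glass-to-glass latency.
<syntaxhighlight lang="python">
# Rough camera frame-rate / capture-time check with OpenCV (assumes a
# camera at V4L2 index 0; this measures host-side capture time only).
import time
import cv2

cap = cv2.VideoCapture(0)
if not cap.isOpened():
    raise RuntimeError("Could not open camera 0")

cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)

times = []
for _ in range(200):
    t0 = time.perf_counter()
    ok, frame = cap.read()
    if not ok:
        break
    times.append(time.perf_counter() - t0)

cap.release()
if times:
    print(f"frames captured: {len(times)}")
    print(f"mean capture time: {1000 * sum(times) / len(times):.1f} ms")
    print(f"approx fps: {len(times) / sum(times):.1f}")
</syntaxhighlight>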
== Advanced Customization and Community Engagement ==
=== Open Source Projects ===
Contribute to an open-source project. Platforms like GitHub host numerous projects where you can collaborate with others, such as K-Scale Labs (https://github.com/kscalelabs). Check out [https://humanoids.wiki/w/Stompy Stompy]!
=== Modular Design ===
Engage in modular robot design to easily swap components or aesthetics. This approach allows for extensive customization and upgrades over time.
== Safety and Continuous Learning ==
=== Safety Protocols ===
Always implement robust safety measures when testing and demonstrating your robot.
4e144263418dd20bdd1d540f94b223463b8597be
Ben's Ideas
0
216
946
2024-05-08T01:37:08Z
Ben
2
Created page with "Tracking document for [[User:Ben]]'s random ideas for things to work on. * Discord bot to automatically post robotics and relevant machine learning papers to the K-Scale Labs..."
wikitext
text/x-wiki
Tracking document for [[User:Ben]]'s random ideas for things to work on.
* Discord bot to automatically post robotics and relevant machine learning papers to the K-Scale Labs Discord group
** Maybe summarize the paper results using ChatGPT?
cf833af97d410caa2ed68f311594c1f135100868
K-Scale Motor Programmer
0
217
949
2024-05-08T15:07:11Z
Matt
16
Add motor programmer page
wikitext
text/x-wiki
=== Purpose ===
aka "The Doodad"
=== Design ===
[https://www.flux.ai/k-scale-labs/motor-programmer?editor=schematic&embed=1 Flux Design Link]
* Component Pins
** LCD
** CAN Transceiver
** Push Buttons
=== Components ===
* MCU
* LCD
** HD44780
d4bd2bf48530ee4145a4c86450d53e4064b8d7a8
950
949
2024-05-08T15:09:49Z
Matt
16
/* Components */
wikitext
text/x-wiki
=== Purpose ===
aka "The Doodad"
=== Design ===
[https://www.flux.ai/k-scale-labs/motor-programmer?editor=schematic&embed=1 Flux Design Link]
* Component Pins
** LCD
** CAN Transceiver
** Push Buttons
=== Components ===
* MCU
* LCD
** Hitachi HD44780
658142f7995123d5a3e03c4d5ebec4eea521ae7d
K-Scale Motor Programmer
0
217
951
950
2024-05-08T15:30:36Z
Matt
16
/* Design */
wikitext
text/x-wiki
=== Purpose ===
aka "The Doodad"
=== Design ===
[https://www.flux.ai/k-scale-labs/motor-programmer?editor=schematic&embed=1 Flux Design Link]
* Component Pins
** LCD
** CAN Transceiver
** Push Buttons
** STM
*** Internal Pull up resistor required
=== Components ===
* MCU
* LCD
** Hitachi HD44780
f565fba130c4d6be3d79c578cf982f242eecfe67
952
951
2024-05-08T15:34:47Z
Matt
16
/* Design */
wikitext
text/x-wiki
=== Purpose ===
aka "The Doodad"
=== Design ===
[https://www.flux.ai/k-scale-labs/motor-programmer?editor=schematic&embed=1 Flux Design Link]
* Component Pins
** LCD
** CAN Transceiver
** Push Buttons
** STM
*** Internal pull down resistor required
=== Components ===
* MCU
* LCD
** Hitachi HD44780
3479fb6c46775ba28425e5a578d12a81644b4b41
Jetson Orin Notes
0
218
956
2024-05-08T21:12:57Z
Tom
23
Created page with "Notes on programming/interfacing with Jetson Orin hardware."
wikitext
text/x-wiki
Notes on programming/interfacing with Jetson Orin hardware.
4402ff9a963e565cb37c8994f02ffe654f7a70e3
957
956
2024-05-08T21:21:36Z
Tom
23
wikitext
text/x-wiki
Notes on programming/interfacing with Jetson Orin hardware.
# Upgrading AGX to Jetson Linux 36.3
## BSP approach (avoids SDK Manager)
* Requires Ubuntu 22.04. Very unhappy to work on Gentoo.
* Requires Intel/AMD 64bit CPU.
* Download "Driver Package (BSP)" from https://developer.nvidia.com/embedded/jetson-linux
* Unpack (as root, get used to doing most of this as root), preserving privileges
- `tar xjpf ...`
* Download "Sample Root Filesystem"
* Unpack (as root..) into rootfs directory inside of the BSP archive above.
* Run './apply_binaries.sh' from the BSP
- Note: If apply_binaries (or frankly, anything, this is brittle) fails, remove and recreate rootfs - the OS might be left in an unbootable state.
* Reboot AGX into "Recovery Mode" - hold the recovery button and reset button, release simultaneously ((sic) reset first?)
* Connect USB-C cable to the debug port ("front" USB-c)
* Nvidia AGX device should appear in the `lsusb`
* Run './nvautoflash.sh'
* Watch for few minutes, typically it crashes early, then go for lunch.
b8cfcabc797bc60e0fdf4b7ec9eea17488ff9101
960
957
2024-05-08T21:47:45Z
Budzianowski
19
wikitext
text/x-wiki
Notes on programming/interfacing with Jetson Orin hardware.
# Upgrading AGX to Jetson Linux 36.3
## BSP approach (avoids SDK Manager)
* Requires Ubuntu 22.04. Very unhappy to work on Gentoo.
* Requires Intel/AMD 64bit CPU.
* Download "Driver Package (BSP)" from https://developer.nvidia.com/embedded/jetson-linux
* Unpack (as root, get used to doing most of this as root), preserving privileges
- `tar xjpf ...`
* Download "Sample Root Filesystem"
* Unpack (as root..) into rootfs directory inside of the BSP archive above.
* Run './apply_binaries.sh' from the BSP
- Note: If apply_binaries (or frankly, anything, this is brittle) fails, remove and recreate rootfs - the OS might be left in an unbootable state.
* Reboot AGX into "Recovery Mode" - hold the recovery button and reset button, release simultaneously ((sic) reset first?)
* Connect USB-C cable to the debug port ("front" USB-c)
* Nvidia AGX device should appear in the `lsusb`
* Run './nvautoflash.sh'
* Watch for few minutes, typically it crashes early, then go for lunch.
[[Firmware]]
0d0d7525fce2f6e905beb82228b66a75b7df005c
961
960
2024-05-08T21:48:41Z
Budzianowski
19
wikitext
text/x-wiki
Notes on programming/interfacing with Jetson Orin hardware.
# Upgrading AGX to Jetson Linux 36.3
## BSP approach (avoids SDK Manager)
* Requires Ubuntu 22.04. Very unhappy to work on Gentoo.
* Requires Intel/AMD 64bit CPU.
* Download "Driver Package (BSP)" from https://developer.nvidia.com/embedded/jetson-linux
* Unpack (as root, get used to doing most of this as root), preserving privileges
- `tar xjpf ...`
* Download "Sample Root Filesystem"
* Unpack (as root..) into rootfs directory inside of the BSP archive above.
* Run './apply_binaries.sh' from the BSP
- Note: If apply_binaries (or frankly, anything, this is brittle) fails, remove and recreate rootfs - the OS might be left in an unbootable state.
* Reboot AGX into "Recovery Mode" - hold the recovery button and reset button, release simultaneously ((sic) reset first?)
* Connect USB-C cable to the debug port ("front" USB-c)
* Nvidia AGX device should appear in the `lsusb`
* Run './nvautoflash.sh'
* Watch for few minutes, typically it crashes early, then go for lunch.
[[Category: Firmware]]
aa3877daafd814bd7b737663faba30147c33f9a4
965
961
2024-05-09T04:29:23Z
Ben
2
wikitext
text/x-wiki
Notes on programming/interfacing with Jetson Orin hardware.
=== Upgrading AGX to Jetson Linux 36.3 ===
==== BSP approach (avoids SDK Manager) ====
* Requires Ubuntu 22.04. Very unhappy to work on Gentoo.
* Requires Intel/AMD 64bit CPU.
* Download "Driver Package (BSP)" from [https://developer.nvidia.com/embedded/jetson-linux here]
* Unpack (as root, get used to doing most of this as root), preserving privileges
** <code>tar xjpf ...</code>
* Download "Sample Root Filesystem"
* Unpack (as root..) into rootfs directory inside of the BSP archive above.
* Run <code>./apply_binaries.sh</code> from the BSP
** Note: If apply_binaries (or frankly, anything, this is brittle) fails, remove and recreate rootfs - the OS might be left in an unbootable state.
* Reboot AGX into "Recovery Mode" - hold the recovery button and reset button, release simultaneously ((sic) reset first?)
* Connect USB-C cable to the debug port ("front" USB-c)
* Nvidia AGX device should appear in the <code>lsusb</code>
* Run <code>./nvautoflash.sh</code>
* Watch for few minutes, typically it crashes early, then go for lunch.
[[Category: Firmware]]
65a28872b9149b997b827924eb52f67d1c44a6be
982
965
2024-05-10T22:23:22Z
Budzianowski
19
wikitext
text/x-wiki
Notes on programming/interfacing with Jetson Orin hardware.
=== Upgrading AGX to Jetson Linux 36.3 ===
==== BSP approach (avoids SDK Manager) ====
* Requires Ubuntu 22.04. Very unhappy to work on Gentoo.
* Requires Intel/AMD 64bit CPU.
* Download "Driver Package (BSP)" from [https://developer.nvidia.com/embedded/jetson-linux here]
* Unpack (as root, get used to doing most of this as root), preserving privileges
** <code>tar xjpf ...</code>
* Download "Sample Root Filesystem"
* Unpack (as root..) into rootfs directory inside of the BSP archive above.
* Run <code>sudo ./tools/l4t_flash_prerequisites.sh</code>
* Run <code>./apply_binaries.sh</code> from the BSP
** Note: If apply_binaries (or frankly, anything, this is brittle) fails, remove and recreate rootfs - the OS might be left in an unbootable state.
* Reboot AGX into "Recovery Mode" - hold the recovery button and reset button, release simultaneously ((sic) reset first?)
* Connect USB-C cable to the debug port ("front" USB-c)
* The NVIDIA AGX device should appear in the <code>lsusb</code> output as NVIDIA Corp. APX
* Run <code>./flash.sh</code>. Different options exist for different use cases (see https://docs.nvidia.com/jetson/archives/r36.3/DeveloperGuide/IN/QuickStart.html#in-quickstart).
** For the Jetson AGX Orin Developer Kit (eMMC): <code>sudo ./flash.sh jetson-agx-orin-devkit internal</code>
* Watch for a few minutes (it typically crashes early if it is going to fail), then go for lunch.
[[Category: Firmware]]
071f59c04c2524b3905f281963f6223a0d3ce3a2
User:Tom
2
219
958
2024-05-08T21:23:10Z
Tom
23
Created page with " - Tom "qdot.me" Mloduchowski"
wikitext
text/x-wiki
- Tom "qdot.me" Mloduchowski
fe02db12102ee6a58d648f1d48ce795280313f5b
Main Page
0
1
959
943
2024-05-08T21:42:54Z
108.211.178.220
0
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
==== Resources ====
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
| Textbook for graduate-level courses in robotics
|-
| [[Learning algorithms]]
| Resources related with training humanoid models in simulation and real environments
|-
| [[Reinforcement Learning]]
| Resources related to understanding reinforcement learning
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Courses]]
| Category for pages about useful courses related to robotics
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Firmware]]
| Category for pages relating to firmware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== List of Actuators ===
{| class="wikitable"
|-
! Actuator
! Notes
|-
| [[MyActuator X-Series]]
| MIT Cheetah-like quasi-direct drive actuator, with planetary gears
|-
| [[OBot]]
| Open-source actuator
|-
| [[SPIN Servo]]
| Open-source actuator
|-
| [[VESCular6]]
| A project based on [[VESC]]
|-
| [[ODrive]]
| A precision motor controller
|-
| [[Solo Motor Controller]]
| A motor controller alternative to the [[ODrive]].
|-
| [[J60]]
| Actuators built for the [[DEEP Robotics]] quadrupeds.
|-
| [[K-Scale Motor Controller]]
| An open-source motor controller
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[Xpeng]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]], [[Robonaut2]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[Proxy]]
|
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[SUPCON]]
| [[Navigator α]]
|-
| [[LASER Robotics]]
| [[HECTOR V2]]
|-
| [[UC Berkeley]]
| [[Berkeley Blue]]
|}
8bcfb12984f64974d144a143e4140b3fc1108963
963
959
2024-05-09T03:28:36Z
Modeless
7
/* List of Humanoid Robots */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
==== Resources ====
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
| Textbook for graduate-level courses in robotics
|-
| [[Learning algorithms]]
| Resources related with training humanoid models in simulation and real environments
|-
| [[Reinforcement Learning]]
| Resources related to understanding reinforcement learning
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Courses]]
| Category for pages about useful courses related to robotics
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Firmware]]
| Category for pages relating to firmware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== List of Actuators ===
{| class="wikitable"
|-
! Actuator
! Notes
|-
| [[MyActuator X-Series]]
| MIT Cheetah-like quasi-direct drive actuator, with planetary gears
|-
| [[OBot]]
| Open-source actuator
|-
| [[SPIN Servo]]
| Open-source actuator
|-
| [[VESCular6]]
| A project based on [[VESC]]
|-
| [[ODrive]]
| A precision motor controller
|-
| [[Solo Motor Controller]]
| A motor controller alternative to the [[ODrive]].
|-
| [[J60]]
| Actuators built for the [[DEEP Robotics]] quadrupeds.
|-
| [[K-Scale Motor Controller]]
| An open-source motor controller
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[Xpeng]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]], [[Robonaut2]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[Proxy]]
|
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[SUPCON]]
| [[Navigator α]]
|-
| [[LASER Robotics]]
| [[HECTOR V2]]
|-
| [[UC Berkeley]]
| [[Berkeley Blue]]
|-
| [[NEURA Robotics]]
| [[4NE-1]]
|}
18aade77769ec45ea72eb74dbfe7a78b354ffb9e
996
963
2024-05-13T07:26:40Z
Ben
2
/* List of Humanoid Robots */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
==== Resources ====
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
| Textbook for graduate-level courses in robotics
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[Reinforcement Learning]]
| Resources related to understanding reinforcement learning
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Courses]]
| Category for pages about useful courses related to robotics
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Firmware]]
| Category for pages relating to firmware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== List of Actuators ===
{| class="wikitable"
|-
! Actuator
! Notes
|-
| [[MyActuator X-Series]]
| MIT Cheetah-like quasi-direct drive actuator, with planetary gears
|-
| [[OBot]]
| Open-source actuator
|-
| [[SPIN Servo]]
| Open-source actuator
|-
| [[VESCular6]]
| A project based on [[VESC]]
|-
| [[ODrive]]
| A precision motor controller
|-
| [[Solo Motor Controller]]
| A motor controller alternative to the [[ODrive]].
|-
| [[J60]]
| Actuators built for the [[DEEP Robotics]] quadrupeds.
|-
| [[K-Scale Motor Controller]]
| An open-source motor controller
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]], [[G1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[Xpeng]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]], [[Robonaut2]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[Proxy]]
|
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[SUPCON]]
| [[Navigator α]]
|-
| [[LASER Robotics]]
| [[HECTOR V2]]
|-
| [[UC Berkeley]]
| [[Berkeley Blue]]
|-
| [[NEURA Robotics]]
| [[4NE-1]]
|}
6ccb0a7f9b51c2682bc6937875c6ce219294038e
Category:Firmware
14
220
962
2024-05-08T21:48:54Z
Budzianowski
19
Created blank page
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
4NE-1
0
221
964
2024-05-09T03:29:43Z
Modeless
7
Created page with "German company NEURA Robotics announced a full humanoid called 4NE-1.<ref>https://neura-robotics.com/products/4ne-1</ref><ref>https://www.youtube.com/watch?v=tUgV4RylVU0</ref>"
wikitext
text/x-wiki
German company NEURA Robotics has announced a full humanoid robot called 4NE-1.<ref>https://neura-robotics.com/products/4ne-1</ref><ref>https://www.youtube.com/watch?v=tUgV4RylVU0</ref>
f1eb449e58778ae2c6d0288e10d2833610669873
User:Vedant
2
222
966
2024-05-09T10:46:06Z
Vedant
24
Created page with "{| class="wikitable" |- ! Vedant Jhawar !! |- | Name || Vedant Jhawar |- | Organization || K-Scale Labs |- | Title || Technical Intern |- |}"
wikitext
text/x-wiki
{| class="wikitable"
|-
! Vedant Jhawar !!
|-
| Name || Vedant Jhawar
|-
| Organization || K-Scale Labs
|-
| Title || Technical Intern
|-
|}
276b737332680efefef379118a77a1e1318a416a
967
966
2024-05-09T10:47:11Z
Vedant
24
wikitext
text/x-wiki
{{infobox person
| name = Vedant Jhawar
| organization = [[K-Scale Labs]]
| title = Technical Intern
}}
[[Category: K-Scale Employees]]
0c2b35a23e45810856abe7bfb28bb55d3093726b
CAN/IMU/Cameras with Jetson Orin
0
20
968
225
2024-05-09T16:05:24Z
Budzianowski
19
Budzianowski moved page [[Jetson Orin]] to [[CAN/IMU/Cameras with Jetson Orin]]
wikitext
text/x-wiki
The Jetson Orin is a development board from Nvidia.
=== CAN Bus ===
See [https://docs.nvidia.com/jetson/archives/r34.1/DeveloperGuide/text/HR/ControllerAreaNetworkCan.html here] for notes on configuring the CAN bus for the Jetson.
[[File:Can bus connections 2.png|none|200px|thumb]]
Install dependencies:
<syntaxhighlight lang="bash">
sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt update
sudo apt upgrade
sudo apt install g++ python3.11-dev
</syntaxhighlight>
Initialize the CAN bus on startup:
<syntaxhighlight lang="bash">
#!/bin/bash
# Set pinmux.
busybox devmem 0x0c303000 32 0x0000C400
busybox devmem 0x0c303008 32 0x0000C458
busybox devmem 0x0c303010 32 0x0000C400
busybox devmem 0x0c303018 32 0x0000C458
# Install modules.
modprobe can
modprobe can_raw
modprobe mttcan
# Turn off CAN.
ip link set down can0
ip link set down can1
# Set parameters.
ip link set can0 type can bitrate 1000000 dbitrate 1000000 berr-reporting on fd on loopback off
ip link set can1 type can bitrate 1000000 dbitrate 1000000 berr-reporting on fd on loopback off
# Turn on CAN.
ip link set up can0
ip link set up can1
</syntaxhighlight>
You can run this script automatically on startup by writing a service configuration to (for example) <code>/etc/systemd/system/can_setup.service</code>
<syntaxhighlight lang="text">
[Unit]
Description=Initialize CAN Interfaces
After=network.target
[Service]
Type=oneshot
ExecStart=/opt/kscale/enable_can.sh
RemainAfterExit=true
[Install]
WantedBy=multi-user.target
</syntaxhighlight>
To enable this, run:
<syntaxhighlight lang="bash">
sudo systemctl enable can_setup
sudo systemctl start can_setup
</syntaxhighlight>
=== Cameras ===
==== Arducam IMX 219 ====
* [https://www.arducam.com/product/arducam-imx219-multi-camera-kit-for-the-nvidia-jetson-agx-orin/ Product Page]
** Shipping was pretty fast
** Order a couple of backup cameras, since some of the cameras they shipped arrived broken
* [https://docs.arducam.com/Nvidia-Jetson-Camera/Nvidia-Jetson-Orin-Series/NVIDIA-Jetson-AGX-Orin/Quick-Start-Guide/ Quick start guide]
Run the installation script:
<syntaxhighlight lang="bash">
wget https://github.com/ArduCAM/MIPI_Camera/releases/download/v0.0.3/install_full.sh
chmod u+x install_full.sh
./install_full.sh -m imx219
</syntaxhighlight>
Supported kernel versions (see releases [https://github.com/ArduCAM/MIPI_Camera/releases here]):
* <code>5.10.104-tegra-35.3.1</code>
* <code>5.10.120-tegra-35.4.1</code>
Install an older kernel from [https://developer.nvidia.com/embedded/jetson-linux-archive here]. This required downgrading to Ubuntu 20.04 (only changing <code>/etc/os-version</code>).
Install dependencies:
<syntaxhighlight lang="bash">
sudo apt update
sudo apt install \
gstreamer1.0-tools \
gstreamer1.0-alsa \
gstreamer1.0-plugins-base \
gstreamer1.0-plugins-good \
gstreamer1.0-plugins-bad \
gstreamer1.0-plugins-ugly \
gstreamer1.0-libav
sudo apt install \
libgstreamer1.0-dev \
libgstreamer-plugins-base1.0-dev \
libgstreamer-plugins-good1.0-dev \
libgstreamer-plugins-bad1.0-dev
sudo apt install \
v4l-utils \
ffmpeg
</syntaxhighlight>
Make sure the camera shows up:
<syntaxhighlight lang="bash">
v4l2-ctl --list-formats-ext
</syntaxhighlight>
Capture a frame from the camera:
<syntaxhighlight lang="bash">
gst-launch-1.0 nvarguscamerasrc sensor-id=0 ! "video/x-raw(memory:NVMM), width=1280, height=720, framerate=60/1" ! nvvidconv ! jpegenc snapshot=TRUE ! filesink location=test.jpg
</syntaxhighlight>
Alternatively, use the following Python code:
<syntaxhighlight lang="python">
import cv2

gst_str = (
    'nvarguscamerasrc sensor-id=0 ! '
    'video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)NV12, framerate=(fraction)60/1 ! '
    'nvvidconv flip-method=0 ! '
    'video/x-raw, width=(int)1280, height=(int)720, format=(string)BGRx ! '
    'videoconvert ! '
    'video/x-raw, format=(string)BGR ! '
    'appsink'
)

cap = cv2.VideoCapture(gst_str, cv2.CAP_GSTREAMER)
while True:
    ret, frame = cap.read()
    if ret:
        print(frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    else:
        break
cap.release()
cv2.destroyAllWindows()
</syntaxhighlight>
=== IMU ===
We're using the [https://ozzmaker.com/product/berryimu-accelerometer-gyroscope-magnetometer-barometricaltitude-sensor/ BerryIMU v3]. To use it, connect pin 3 on the Jetson to SDA and pin 5 to SCL for I2C bus 7. You can verify the connection is successful if the following command matches:
<syntaxhighlight lang="bash">
$ sudo i2cdetect -y -r 7
0 1 2 3 4 5 6 7 8 9 a b c d e f
00: -- -- -- -- -- -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- 1c -- -- --
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
40: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
60: -- -- -- -- -- -- -- -- -- -- 6a -- -- -- -- --
70: -- -- -- -- -- -- -- 77
</syntaxhighlight>
The equivalent command on the Raspberry Pi should use bus 1:
<syntaxhighlight lang="bash">
sudo i2cdetect -y -r 1
</syntaxhighlight>
The default addresses are:
* <code>0x6A</code>: Gyroscope and accelerometer
* <code>0x1C</code>: Magnetometer
* <code>0x77</code>: Barometer
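A quick way to confirm the IMU is responding from Python is to read the gyroscope/accelerometer chip's WHO_AM_I register with <code>smbus2</code>. The register addresses below follow the LSM6DSL datasheet (the gyro/accel part used on the BerryIMU v3) and should be double-checked against your board revision; the bus number matches the Jetson wiring above.
<syntaxhighlight lang="python">
# BerryIMU v3 I2C sanity check on Jetson I2C bus 7 (use bus 1 on a
# Raspberry Pi). Register addresses follow the LSM6DSL datasheet; verify
# them against your board revision.
from smbus2 import SMBus

I2C_BUS = 7          # 1 on a Raspberry Pi
LSM6DSL_ADDR = 0x6A  # gyroscope + accelerometer
WHO_AM_I = 0x0F      # identity register (expected value 0x6A)
CTRL1_XL = 0x10      # accelerometer control register
OUTX_L_XL = 0x28     # first accelerometer output register (X axis, low byte)

with SMBus(I2C_BUS) as bus:
    who = bus.read_byte_data(LSM6DSL_ADDR, WHO_AM_I)
    print(f"WHO_AM_I = 0x{who:02X} (expected 0x6A for the LSM6DSL)")

    # Enable the accelerometer at 104 Hz, +/-2 g (0x40 per the datasheet).
    bus.write_byte_data(LSM6DSL_ADDR, CTRL1_XL, 0x40)

    # Read the three 16-bit little-endian accelerometer axes.
    raw = bus.read_i2c_block_data(LSM6DSL_ADDR, OUTX_L_XL, 6)
    x, y, z = (int.from_bytes(raw[i:i + 2], "little", signed=True) for i in (0, 2, 4))
    print("accel raw (x, y, z):", x, y, z)
</syntaxhighlight>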
[[Category: Hardware]]
[[Category: Electronics]]
51030b57b98b338a704c3f6daae5082cb8b122c3
Jetson Orin
0
223
969
2024-05-09T16:05:26Z
Budzianowski
19
Budzianowski moved page [[Jetson Orin]] to [[CAN/IMU/Cameras with Jetson Orin]]
wikitext
text/x-wiki
#REDIRECT [[CAN/IMU/Cameras with Jetson Orin]]
cc10907f7af0d9ca273063eeb79b951ec04e1607
Isaac Sim Automator
0
224
970
2024-05-09T20:03:59Z
Vrtnis
21
Created page with "== Isaac Sim Automator == The Isaac Sim Automator tool automates the deployment of Isaac Sim to public clouds, enabling efficient and scalable simulations. === Installation =..."
wikitext
text/x-wiki
== Isaac Sim Automator ==
The Isaac Sim Automator tool automates the deployment of Isaac Sim to public clouds, enabling efficient and scalable simulations.
=== Installation ===
==== Installing Docker ====
Ensure Docker is installed on your system for container management. For installation guidance, see [https://docs.docker.com/engine/install/ Docker Installation].
==== Obtaining NGC API Key ====
Obtain an NGC API Key to download Docker images from NVIDIA NGC. This can be done at [https://ngc.nvidia.com/setup/api-key NGC API Key Setup].
==== Building the Container ====
To build the Automator container, use the following command in your project root directory:
<source lang="bash">
./build
</source>
This builds and tags the Isaac Sim Automator container as 'isa'.
=== Usage ===
==== Running Automator Commands ====
You can run the Automator commands in two ways:
* Enter the automator container and run commands inside:
<source lang="bash">
./run
./somecommand
</source>
* Or, prepend the command with ./run:
<source lang="bash">
./run ./somecommand <parameters>
</source>
Examples include:
<source lang="bash">
./run ./deploy-aws
./run ./destroy my-deployment
</source>
==== Deploying Isaac Sim ====
Choose the appropriate cloud provider (AWS, GCP, Azure, Alibaba Cloud) and follow the provided steps to deploy Isaac Sim using the automator container.
== K-Scale Sim Library ==
The K-Scale Sim Library is built atop Isaac Gym for simulating Stompy, providing interfaces for defined tasks like walking and getting up.
=== Getting Started with K-Scale Sim Library ===
==== Initial Setup ====
First, clone the K-Scale Sim Library repository and set up the environment:
<source lang="bash">
git clone https://github.com/kscalelabs/sim.git
cd sim
conda create --name kscale-sim-library python=3.8.19
conda activate kscale-sim-library
make install-dev
</source>
==== Installing Dependencies ====
After setting up the base environment, download and install necessary third-party packages:
<source lang="bash">
wget https://developer.nvidia.com/isaac-gym/IsaacGym_Preview_4_Package.tar.gz
tar -xvf IsaacGym_Preview_4_Package.tar.gz
conda env config vars set ISAACGYM_PATH=`pwd`/isaacgym
conda deactivate
conda activate kscale-sim-library
make install-third-party-external
</source>
=== Running Experiments ===
==== Setting Up Experiments ====
Download and prepare the Stompy model for experiments:
<source lang="bash">
wget https://media.kscale.dev/stompy.tar.gz
tar -xzvf stompy.tar.gz
python sim/scripts/create_fixed_torso.py
export MODEL_DIR=stompy
</source>
==== Training and Evaluation ====
* For leg-specific tasks:
<source lang="bash">
python sim/humanoid_gym/train.py --task=legs_ppo --num_envs=4096 --headless
</source>
* For full-body tasks:
<source lang="bash">
python sim/humanoid_gym/train.py --task=stompy_ppo --num_envs=4096 --headless
</source>
Evaluate the models on CPU:
<source lang="bash">
python sim/humanoid_gym/play.py --task=legs_ppo --sim_device=cpu
</source>
=== Troubleshooting ===
Common issues and solutions for setting up and running Isaac Gym and K-Scale simulations.
<source lang="bash">
git submodule update --init --recursive
export LD_LIBRARY_PATH=PATH_TO_YOUR_ENV/lib:$LD_LIBRARY_PATH
sudo apt-get install vulkan1
</source>
9bed181ad0605992dde731edbad2cf05082e67fb
971
970
2024-05-09T20:10:38Z
Vrtnis
21
/* Isaac Sim Automator */
wikitext
text/x-wiki
== Isaac Sim Automator ==
The '''Isaac Sim Automator''' is essentially your go-to tool for setting up and managing Isaac Sim on various cloud platforms, and it does all this with the help of Docker. Docker packages up an application with all its dependencies into a container that can run anywhere. This means you don't have to worry about your simulation behaving differently from one environment to another — whether it's on your laptop or in the cloud.
Using the Automator is super straightforward. It comes with a set of command-line tools that make deploying to clouds like AWS, Google Cloud, Azure, and Alibaba Cloud almost as easy as sending an email. Here’s how you can deploy Isaac Sim to, say, AWS:
<source lang="bash">
./deploy-aws
</source>
Or if you're more into Google Cloud, you'd use:
<source lang="bash">
./deploy-gcp
</source>
These commands kick off the entire process to get Isaac Sim up and running on your preferred cloud service. They manage all the techie stuff like setting up resources and configuring instances, which means less hassle for you.
In a nutshell, the Isaac Sim Automator is fantastic if you’re diving into the K-Scale Sim Library. It simplifies all the tricky cloud setup stuff, letting you focus on creating and running your simulations. Whether you're a researcher, a developer, or just someone playing around with simulations, this tool can save you a ton of time and effort.
=== Installation ===
==== Installing Docker ====
Ensure Docker is installed on your system for container management. For installation guidance, see [https://docs.docker.com/engine/install/ Docker Installation].
==== Obtaining NGC API Key ====
Obtain an NGC API Key to download Docker images from NVIDIA NGC. This can be done at [https://ngc.nvidia.com/setup/api-key NGC API Key Setup].
==== Building the Container ====
To build the Automator container, use the following command in your project root directory:
<source lang="bash">
./build
</source>
This builds and tags the Isaac Sim Automator container as 'isa'.
=== Usage ===
==== Running Automator Commands ====
You can run the Automator commands in two ways:
* Enter the automator container and run commands inside:
<source lang="bash">
./run
./somecommand
</source>
* Or, prepend the command with ./run:
<source lang="bash">
./run ./somecommand <parameters>
</source>
Examples include:
<source lang="bash">
./run ./deploy-aws
./run ./destroy my-deployment
</source>
==== Deploying Isaac Sim ====
Choose the appropriate cloud provider (AWS, GCP, Azure, Alibaba Cloud) and follow the provided steps to deploy Isaac Sim using the automator container.
== K-Scale Sim Library ==
The K-Scale Sim Library is built atop Isaac Gym for simulating Stompy, providing interfaces for defined tasks like walking and getting up.
=== Getting Started with K-Scale Sim Library ===
==== Initial Setup ====
First, clone the K-Scale Sim Library repository and set up the environment:
<source lang="bash">
git clone https://github.com/kscalelabs/sim.git
cd sim
conda create --name kscale-sim-library python=3.8.19
conda activate kscale-sim-library
make install-dev
</source>
==== Installing Dependencies ====
After setting up the base environment, download and install necessary third-party packages:
<source lang="bash">
wget https://developer.nvidia.com/isaac-gym/IsaacGym_Preview_4_Package.tar.gz
tar -xvf IsaacGym_Preview_4_Package.tar.gz
conda env config vars set ISAACGYM_PATH=`pwd`/isaacgym
conda deactivate
conda activate kscale-sim-library
make install-third-party-external
</source>
=== Running Experiments ===
==== Setting Up Experiments ====
Download and prepare the Stompy model for experiments:
<source lang="bash">
wget https://media.kscale.dev/stompy.tar.gz
tar -xzvf stompy.tar.gz
python sim/scripts/create_fixed_torso.py
export MODEL_DIR=stompy
</source>
==== Training and Evaluation ====
* For leg-specific tasks:
<source lang="bash">
python sim/humanoid_gym/train.py --task=legs_ppo --num_envs=4096 --headless
</source>
* For full-body tasks:
<source lang="bash">
python sim/humanoid_gym/train.py --task=stompy_ppo --num_envs=4096 --headless
</source>
Evaluate the models on CPU:
<source lang="bash">
python sim/humanoid_gym/play.py --task=legs_ppo --sim_device=cpu
</source>
=== Troubleshooting ===
Common issues and solutions for setting up and running Isaac Gym and K-Scale simulations.
<source lang="bash">
git submodule update --init --recursive
export LD_LIBRARY_PATH=PATH_TO_YOUR_ENV/lib:$LD_LIBRARY_PATH
sudo apt-get install vulkan1
</source>
a0917046510ae64286dcf1f69cba4042f6dde290
972
971
2024-05-09T20:24:22Z
Vrtnis
21
wikitext
text/x-wiki
== Isaac Sim Automator ==
The '''Isaac Sim Automator''' is essentially your go-to tool for setting up and managing Isaac Sim on various cloud platforms, and it does all this with the help of Docker. Docker packages up an application with all its dependencies into a container that can run anywhere. This means you don't have to worry about your simulation behaving differently from one environment to another — whether it's on your laptop or in the cloud.
Using the Automator is super straightforward. It comes with a set of command-line tools for deploying to clouds like AWS, Google Cloud, Azure, and Alibaba Cloud. Here’s how you can deploy Isaac Sim to, say, AWS:
<source lang="bash">
./deploy-aws
</source>
Or if you're more into Google Cloud, you'd use:
<source lang="bash">
./deploy-gcp
</source>
These commands kick off the entire process to get Isaac Sim up and running on your preferred cloud service. They manage details like setting up resources and configuring instances.
In a nutshell, the Isaac Sim Automator is fantastic if you’re diving into the K-Scale Sim Library. It simplifies cloud setup, letting you focus on creating and running your simulations. Whether you're a researcher, a developer, or a beginner to robotics, this tool can save you a ton of time and effort.
=== Installation ===
==== Installing Docker ====
Ensure Docker is installed on your system for container management. For installation guidance, see [https://docs.docker.com/engine/install/ Docker Installation].
==== Obtaining NGC API Key ====
Obtain an NGC API Key to download Docker images from NVIDIA NGC. This can be done at [https://ngc.nvidia.com/setup/api-key NGC API Key Setup].
==== Building the Container ====
To build the Automator container, use the following command in your project root directory:
<source lang="bash">
./build
</source>
This builds and tags the Isaac Sim Automator container as 'isa'.
=== Usage ===
==== Running Automator Commands ====
You can run the Automator commands in two ways:
* Enter the automator container and run commands inside:
<source lang="bash">
./run
./somecommand
</source>
* Or, prepend the command with ./run:
<source lang="bash">
./run ./somecommand <parameters>
</source>
Examples include:
<source lang="bash">
./run ./deploy-aws
./run ./destroy my-deployment
</source>
==== Deploying Isaac Sim ====
Choose the appropriate cloud provider (AWS, GCP, Azure, Alibaba Cloud) and follow the provided steps to deploy Isaac Sim using the automator container.
== K-Scale Sim Library ==
The K-Scale Sim Library is built atop Isaac Gym for simulating Stompy, providing interfaces for defined tasks like walking and getting up.
=== Getting Started with K-Scale Sim Library ===
==== Initial Setup ====
First, clone the K-Scale Sim Library repository and set up the environment:
<source lang="bash">
git clone https://github.com/kscalelabs/sim.git
cd sim
conda create --name kscale-sim-library python=3.8.19
conda activate kscale-sim-library
make install-dev
</source>
==== Installing Dependencies ====
After setting up the base environment, download and install necessary third-party packages:
<source lang="bash">
wget https://developer.nvidia.com/isaac-gym/IsaacGym_Preview_4_Package.tar.gz
tar -xvf IsaacGym_Preview_4_Package.tar.gz
conda env config vars set ISAACGYM_PATH=`pwd`/isaacgym
conda deactivate
conda activate kscale-sim-library
make install-third-party-external
</source>
=== Running Experiments ===
==== Setting Up Experiments ====
Download and prepare the Stompy model for experiments:
<source lang="bash">
wget https://media.kscale.dev/stompy.tar.gz
tar -xzvf stompy.tar.gz
python sim/scripts/create_fixed_torso.py
export MODEL_DIR=stompy
</source>
==== Training and Evaluation ====
* For leg-specific tasks:
<source lang="bash">
python sim/humanoid_gym/train.py --task=legs_ppo --num_envs=4096 --headless
</source>
* For full-body tasks:
<source lang="bash">
python sim/humanoid_gym/train.py --task=stompy_ppo --num_envs=4096 --headless
</source>
Evaluate the models on CPU:
<source lang="bash">
python sim/humanoid_gym/play.py --task=legs_ppo --sim_device=cpu
</source>
=== Troubleshooting ===
Common issues and solutions for setting up and running Isaac Gym and K-Scale simulations.
<source lang="bash">
git submodule update --init --recursive
export LD_LIBRARY_PATH=PATH_TO_YOUR_ENV/lib:$LD_LIBRARY_PATH
sudo apt-get install vulkan1
</source>
7356970102cfdcd28f2ac67ecd6b5fd2127ab569
973
972
2024-05-09T20:51:56Z
Vrtnis
21
/* Isaac Sim Automator */
wikitext
text/x-wiki
== Isaac Sim Automator ==
The [https://github.com/NVIDIA-Omniverse/IsaacSim-Automator/ Isaac Sim Automator] is essentially your go-to tool for setting up and managing Isaac Sim on various cloud platforms, and it does all this with the help of Docker. Docker packages up an application with all its dependencies into a container that can run anywhere. This means you don't have to worry about your simulation behaving differently from one environment to another — whether it's on your laptop or in the cloud.
Using the Automator is super straightforward. It comes with a set of command-line tools for deploying to clouds like AWS, Google Cloud, Azure, and Alibaba Cloud. Here’s how you can deploy Isaac Sim to, say, AWS:
<source lang="bash">
./deploy-aws
</source>
Or if you're more into Google Cloud, you'd use:
<source lang="bash">
./deploy-gcp
</source>
These commands kick off the entire process to get Isaac Sim up and running on your preferred cloud service. They manage details like setting up resources and configuring instances.
In a nutshell, the Isaac Sim Automator is fantastic if you’re diving into the K-Scale Sim Library. It simplifies cloud setup, letting you focus on creating and running your simulations. Whether you're a researcher, a developer, or a beginner to robotics, this tool can save you a ton of time and effort.
=== Installation ===
==== Installing Docker ====
Ensure Docker is installed on your system for container management. For installation guidance, see [https://docs.docker.com/engine/install/ Docker Installation].
==== Obtaining NGC API Key ====
Obtain an NGC API Key to download Docker images from NVIDIA NGC. This can be done at [https://ngc.nvidia.com/setup/api-key NGC API Key Setup].
==== Building the Container ====
To build the Automator container, use the following command in your project root directory:
<source lang="bash">
./build
</source>
This builds and tags the Isaac Sim Automator container as 'isa'.
=== Usage ===
==== Running Automator Commands ====
You can run the Automator commands in two ways:
* Enter the automator container and run commands inside:
<source lang="bash">
./run
./somecommand
</source>
* Or, prepend the command with ./run:
<source lang="bash">
./run ./somecommand <parameters>
</source>
Examples include:
<source lang="bash">
./run ./deploy-aws
./run ./destroy my-deployment
</source>
==== Deploying Isaac Sim ====
Choose the appropriate cloud provider (AWS, GCP, Azure, Alibaba Cloud) and follow the provided steps to deploy Isaac Sim using the automator container.
== K-Scale Sim Library ==
The K-Scale Sim Library is built atop Isaac Gym for simulating Stompy, providing interfaces for defined tasks like walking and getting up.
=== Getting Started with K-Scale Sim Library ===
==== Initial Setup ====
First, clone the K-Scale Sim Library repository and set up the environment:
<source lang="bash">
git clone https://github.com/kscalelabs/sim.git
cd sim
conda create --name kscale-sim-library python=3.8.19
conda activate kscale-sim-library
make install-dev
</source>
==== Installing Dependencies ====
After setting up the base environment, download and install necessary third-party packages:
<source lang="bash">
wget https://developer.nvidia.com/isaac-gym/IsaacGym_Preview_4_Package.tar.gz
tar -xvf IsaacGym_Preview_4_Package.tar.gz
conda env config vars set ISAACGYM_PATH=`pwd`/isaacgym
conda deactivate
conda activate kscale-sim-library
make install-third-party-external
</source>
=== Running Experiments ===
==== Setting Up Experiments ====
Download and prepare the Stompy model for experiments:
<source lang="bash">
wget https://media.kscale.dev/stompy.tar.gz
tar -xzvf stompy.tar.gz
python sim/scripts/create_fixed_torso.py
export MODEL_DIR=stompy
</source>
==== Training and Evaluation ====
* For leg-specific tasks:
<source lang="bash">
python sim/humanoid_gym/train.py --task=legs_ppo --num_envs=4096 --headless
</source>
* For full-body tasks:
<source lang="bash">
python sim/humanoid_gym/train.py --task=stompy_ppo --num_envs=4096 --headless
</source>
Evaluate the models on CPU:
<source lang="bash">
python sim/humanoid_gym/play.py --task=legs_ppo --sim_device=cpu
</source>
=== Troubleshooting ===
Common issues and solutions for setting up and running Isaac Gym and K-Scale simulations.
<source lang="bash">
# Pull in any missing git submodules
git submodule update --init --recursive
# Make the conda environment's libraries (e.g. libpython) visible at runtime; adjust the path to your environment
export LD_LIBRARY_PATH=PATH_TO_YOUR_ENV/lib:$LD_LIBRARY_PATH
# Install the Vulkan runtime if it is missing
sudo apt-get install vulkan1
</source>
42f21ba77ab4c6c4f34c4d5f94aa12652ea662a1
User:Paweł
2
225
974
2024-05-09T23:53:01Z
Budzianowski
19
Created page with "hello"
wikitext
text/x-wiki
hello
aaf4c61ddcc5e8a2dabede0f3b482cd9aea9434d
975
974
2024-05-09T23:54:04Z
Budzianowski
19
wikitext
text/x-wiki
[http://budzianowski.github.io/ Paweł] is one of the co-founders of [[K-Scale Labs]].
[[Category: K-Scale Employees]]
1594da6290ba47ad956555057bbdcbce62f26923
976
975
2024-05-09T23:54:41Z
Budzianowski
19
wikitext
text/x-wiki
[http://budzianowski.github.io/ Paweł] is one of the co-founders of [[K-Scale Labs]].
AI, cycling and a tea with lemon.
[[Category: K-Scale Employees]]
8dc76d8c7383ff260783b233032a787edde1af09
File:Stompy 1.png
6
226
977
2024-05-10T00:29:25Z
Ben
2
wikitext
text/x-wiki
Stompy 1
8bc1a9c3b2ff6d0f29975f88f8692c36f88a69f1
File:Stompy 2.png
6
227
978
2024-05-10T00:29:40Z
Ben
2
wikitext
text/x-wiki
Stompy 2
e0908a5eb185820dad543ca27d2df7f64a056f8e
File:Stompy 3.png
6
228
979
2024-05-10T00:29:53Z
Ben
2
wikitext
text/x-wiki
Stompy 3
156eb4ce99187583e5cb9c1e9eb6225d0ad0508e
File:Stompy 4.png
6
229
980
2024-05-10T00:30:06Z
Ben
2
wikitext
text/x-wiki
Stompy 4
67791452427b9b3b5d1bf0e3254fbc15ee179cb9
Stompy
0
2
981
721
2024-05-10T00:30:39Z
Ben
2
wikitext
text/x-wiki
[[File:Stompy.jpg|right|300px|thumb]]
{{infobox robot
| name = Stompy
| organization = [[K-Scale Labs]]
| cost = USD 10,000
}}
Stompy is an open-source humanoid robot developed by [[K-Scale Labs]]. Here are some relevant links:
* [[Stompy To-Do List]]
* [[Stompy Build Guide]]
= Hardware =
This page is dedicated to detailing the hardware selections for humanoid robots, including various components such as actuators, cameras, compute units, PCBs and modules, batteries, displays, microphones, speakers, as well as wiring and connectors.
== Actuators ==
Actuators are the components that allow the robot to move and interact with its environment. They convert energy into mechanical motion. Common types used in humanoid robots include:
* Servo motors
* Stepper motors
* Linear actuators
== Cameras ==
Cameras are essential for visual processing, allowing the robot to perceive its surroundings. Important considerations include:
* Resolution and frame rate
* Field of view
* Depth sensing capabilities (3D cameras)
== Compute ==
The compute section handles the processing requirements of the robot. This includes:
* Microprocessors and microcontrollers
* Single-board computers like Raspberry Pi or Nvidia Jetson
* Dedicated AI accelerators for machine learning tasks
== PCB and Modules ==
Printed Circuit Boards (PCBs) and the modules on them are the backbone of the robot's electronic system.
* Main control board
* Power management modules
* Sensor interfaces
* Communication modules (Wi-Fi, Bluetooth)
== Batteries ==
Batteries provide the necessary power to all robotic systems and are crucial for mobile autonomy. Selection factors include:
* Battery type (Li-Ion, NiMH, Lead-Acid)
* Capacity (measured in mAh or Ah)
* Voltage and energy density
* Safety features and durability
== Displays ==
Displays present information such as system status, data, and interactive elements. Key features include:
* Size variations ranging from small to large panels
* Touchscreen capabilities
* High resolution displays
== Microphones ==
Microphones enable the robot to receive and process audio inputs, crucial for voice commands and auditory data. Factors to consider are:
* Sensitivity and noise cancellation
* Directionality (omnidirectional vs. unidirectional)
* Integration with voice recognition software
== Speakers ==
Speakers allow the robot to communicate audibly with its environment, essential for interaction and alerts. Considerations include:
* Power output and sound quality
* Size and mounting options
* Compatibility with audio processing hardware
== Wiring and Connectors ==
Proper wiring and connectors ensure reliable communication and power supply throughout the robot's components.
* Types of wires (gauge, shielding)
* Connectors (pin types, waterproofing)
* Cable management solutions
= Simulation =
For the latest simulation artifacts, see [https://kscale.dev/ the website].
= Artwork =
Here's some art of Stompy!
<gallery>
Stompy 1.png
Stompy 2.png
Stompy 3.png
Stompy 4.png
</gallery>
[[Category:Robots]]
[[Category:Open Source]]
[[Category:K-Scale]]
7cff241cdb3456cd93e8af7602554cb0e22b7f18
MIT Cheetah
0
158
983
669
2024-05-11T00:27:38Z
Hatsmagee
26
buit -> bit
wikitext
text/x-wiki
'''MIT Cheetah''' is an open-source quadrupedal robot designed around low-inertia actuators. The robot was developed in the lab of Professor Sangbae Kim at MIT<ref>https://www.csail.mit.edu/news/one-giant-leap-mini-cheetah</ref>. The MIT Cheetah is known for its agility and its ability to adapt to varying terrain conditions without requiring a terrain map in advance.
{{infobox robot
| name = MIT Cheetah
| organization = MIT
| video_link =
| cost =
| height =
| weight = 20 pounds
| speed = Roughly twice average human walking speed<ref>https://news.mit.edu/2019/mit-mini-cheetah-first-four-legged-robot-to-backflip-0304</ref>,<ref>http://robotics.mit.edu/mini-cheetah-first-four-legged-robot-do-backflip</ref>
| lift_force =
| battery_life =
| battery_capacity =
| purchase_link =
| number_made =
| dof =
| status = Active
}}
== Design and Development ==
The MIT Cheetah displays remarkable adaptability in its functions. Despite being only 20 pounds, the robot can bend and swing its legs wide, allowing it to either walk right side up or upside down<ref>https://news.mit.edu/2019/mit-mini-cheetah-first-four-legged-robot-to-backflip-0304</ref>. This flexibility is a result of its design that prioritizes a wide range of motion.
The robot is further equipped to deal with challenges posed by rough, uneven terrain. It can traverse such landscapes at a pace twice as fast as an average person's walking speed<ref>http://robotics.mit.edu/mini-cheetah-first-four-legged-robot-do-backflip</ref>. The MIT Cheetah’s design, particularly the implementation of a control system that enables agile running, has been largely driven by the "learn-by-experience model". This approach, in contrast to previous designs reliant primarily on human analytical insights, allows the robot to respond quickly to changes in the environment<ref>https://news.mit.edu/2022/3-questions-how-mit-mini-cheetah-learns-run-fast-0317</ref>.
=== Chips ===
* [https://www.st.com/en/microcontrollers-microprocessors/stm32-32-bit-arm-cortex-mcus.html STM32 32-bit Arm Cortex MCU]
* [https://www.ti.com/product/DRV8323 DRV8323 3-phase smart gate driver]
* [https://www.monolithicpower.com/en/ma702.html MA702 angular position measurement device]
* [https://www.microchip.com/en-us/product/mcp2515 MCP2515 CAN Controller with SPI Interface]
== References ==
<references />
[[Category:Actuators]]
[[Category:Open Source]]
[[Category:Robots]]
c48be87863752a37917a4aa2359e11bc9936780e
Humanoid Robots Wiki talk:About
5
230
984
2024-05-12T05:39:17Z
204.15.110.167
0
Created page with "Winston here from Iowa. I'm always watching to see what newer sites are going up and I just wanted to see if you would like an extra hand with getting some targeted traffic, C..."
wikitext
text/x-wiki
Winston here from Iowa. I'm always watching to see what newer sites are going up and I just wanted to see if you would like an extra hand with getting some targeted traffic, Create custom AI bots to answer questions from visitors on your site or walk them through a sales process, create videos/images/adcopy, remove negative listings, the list goes on. I'll even shoulder 90% of the costs, dedicating my time and tools that I've created myself and bought over the years. I've been doing this for over 22 years, helped thousands of people and have loved every minute of it.
There's virtually no cost on my end to do any of this for you except for my time starting at 99 a month. I don't mean to impose; I was just curious if I could lend a hand.
Brief history, I've been working from home for a couple decades now and I love helping others. I'm married, have three girls and if I can provide for them by helping you and giving back by using the tools and knowledge I've built and learned over the years, I can't think of a better win-win.
It amazes me that no one else is helping others quite like I do and I'd love to show you how I can help out. So, if you need any extra help in any way, please let me know either way as I value your time and don't want to pester you.
PS – If I didn’t mention something you might need help with just ask, I only mentioned a handful of things to keep this brief :-)
All the best,
Winston
Cell - 1-319-435-1790
My Site (w/Live Chat) - https://cutt.ly/ww91SRIU
82a02209235463885955022c253b0960a5ae40f9
Controller Area Network (CAN)
0
155
985
772
2024-05-12T22:51:17Z
Ben
2
wikitext
text/x-wiki
The '''Controller Area Network''' (CAN) is a vehicle bus standard designed to let microcontrollers and devices communicate with each other without a host computer. Its primary purpose is to interconnect the various systems within a vehicle and support fast, robust data exchange between them.
== MCP2515 ==
The '''MCP2515''' is an integrated circuit produced by Microchip Technology, designed to function as a stand-alone CAN controller. It communicates with a host microcontroller over SPI (Serial Peripheral Interface), which makes it versatile in a range of applications. Although its primary use is in the automotive industry, it is also used in a variety of other control applications.<ref>MCP2515 Stand-Alone CAN Controller with SPI Interface. Microchip Technology. [https://www.microchip.com/wwwproducts/en/en010406 Official datasheet].</ref>
The MCP2515 bridges the connection between the CAN protocol and the SPI protocol by receiving CAN messages and translating them into SPI data, allowing the microcontroller to interpret the information. Similarly, it transforms SPI data into CAN messages for transmission.<ref>MCP2515 Stand-Alone CAN Controller with SPI Interface. Microchip Technology. [https://www.microchip.com/wwwproducts/en/en010406 Official datasheet].</ref>
The MCP2515 supports CAN 2.0A and 2.0B, aligning it with established CAN standards and equipping it for both standard and extended frame formats.<ref>MCP2515 Stand-Alone CAN Controller with SPI Interface. Microchip Technology. [https://www.microchip.com/wwwproducts/en/en010406 Official datasheet].</ref>
== Applications ==
The [https://python-can.readthedocs.io/en/stable/ python-can] library provides Controller Area Network support for Python, with common abstractions over different hardware interfaces and a suite of utilities for sending and receiving messages on a CAN bus. A minimal usage sketch is shown below.
There is an example installation documented on the [[Jetson_Orin]] page.
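The following is a minimal python-can sketch, assuming a SocketCAN interface named <code>can0</code> that has already been brought up on the host; the arbitration ID and payload are placeholders, not values used by any particular motor controller.
<syntaxhighlight lang=python>
import can

# Open an existing SocketCAN interface (assumes `can0` is already configured).
bus = can.interface.Bus(channel="can0", bustype="socketcan")

# Send a standard (11-bit ID) frame with a placeholder payload.
msg = can.Message(arbitration_id=0x123, data=[0x01, 0x02, 0x03], is_extended_id=False)
bus.send(msg)

# Block for up to one second waiting for any frame on the bus.
reply = bus.recv(timeout=1.0)
if reply is not None:
    print(f"Received ID=0x{reply.arbitration_id:X} data={bytes(reply.data).hex()}")

bus.shutdown()
</syntaxhighlight>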
=== Pi4 ===
For the Raspberry Pi, an easy-to-deploy 2-Channel Isolated CAN Bus Expansion HAT is available, which makes it quick to connect the Pi to CAN peripherals. See the [https://www.waveshare.com/wiki/2-CH_CAN_HAT tutorial] for more information.
=== Arduino ===
Arduino has good support for the MCP2515, with several implementations of the [https://github.com/Seeed-Studio/Seeed_Arduino_CAN drivers] available.
=== MCP2515 Driver ===
By default, every CAN node is expected to acknowledge every message on the bus, whether or not that node is interested in it. However, interference on the network can corrupt bits during transmission. In the standard mode, a node will not only keep retransmitting unacknowledged messages, but after a short period it will also start sending error frames and eventually go into bus-off mode and stop. This causes severe issues when the CAN network drives multiple motors.
The controller supports a [http://ww1.microchip.com/downloads/en/DeviceDoc/MCP2515-Stand-Alone-CAN-Controller-with-SPI-20001801J.pdf one-shot] mode (transmit once, with no automatic retransmission), which requires changes to the driver. A sketch of enabling it over raw SPI is shown below.
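The following is a minimal sketch, not the stock driver's method, of enabling one-shot mode by setting the OSM bit of the CANCTRL register with the BIT MODIFY SPI command described in the datasheet. It assumes the MCP2515 is exposed as a raw <code>spidev</code> device on bus 0, chip-select 0, and is not already claimed by a kernel CAN driver.
<syntaxhighlight lang=python>
import spidev

BIT_MODIFY = 0x05   # MCP2515 SPI "Bit Modify" instruction
CANCTRL    = 0x0F   # control register address
OSM_BIT    = 0x08   # One-Shot Mode bit (bit 3) of CANCTRL

spi = spidev.SpiDev()
spi.open(0, 0)                 # assumption: the chip is on /dev/spidev0.0
spi.max_speed_hz = 1_000_000

# Set only the OSM bit, leaving the rest of CANCTRL untouched:
# [instruction, register address, mask, data]
spi.xfer2([BIT_MODIFY, CANCTRL, OSM_BIT, OSM_BIT])

spi.close()
</syntaxhighlight>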
=== Wiring ===
Here is some suggested equipment for wiring a CAN bus:
* Molex Mini-Fit Junior connectors
** [https://www.digikey.ca/en/products/detail/molex/0638190901/9655931 Crimper]
** [https://www.digikey.ca/en/products/detail/molex/0766500013/2115996 Connector kit]
** [https://www.aliexpress.us/item/3256805730106963.html Extraction tool]
* CAN bus cable
** [https://www.digikey.ca/en/products/detail/igus/CF211-02-01-02/18724291 Cable]
** [https://www.digikey.com/en/products/detail/igus/CF891-07-02/21280679 Alternative Cable]
* Heat shrink
** [https://www.amazon.com/Eventronic-Heat-Shrink-Tubing-Kit-3/dp/B0BVVMCY86 Tubing]
== References ==
<references />
[[Category:Communication]]
4bd94ec196d10b06c13a0ca00c59a09ae35b4bcf
Stereo Vision
0
231
986
2024-05-13T01:39:23Z
Vrtnis
21
Created page with "This is a guide for setting up and experimenting with stereo cameras in your projects. This guide is incomplete and a work in progress; you can help by expanding it! == Choo..."
wikitext
text/x-wiki
This is a guide for setting up and experimenting with stereo cameras in your projects.
This guide is incomplete and a work in progress; you can help by expanding it!
== Choosing the Right Stereo Camera ==
In the realm of computer vision, selecting an appropriate stereo camera is fundamental. Considerations such as resolution, compatibility, and specific features like Image Signal Processing (ISP) support are paramount. For example, the [https://www.arducam.com/product/arducam-pivariety-18mp-ar1820hs-color-camera-module-for-rpi-cam-b0367/ Arducam Pivariety 18MP AR1820HS camera module] offers high resolution and is compatible with Raspberry Pi models, featuring auto exposure, auto white balance, and lens shading that are crucial for capturing high-quality images under varying lighting conditions.
== Implementation and Testing ==
Setting up and testing stereo cameras can vary based on the project's needs. For example, streaming from a USB stereo camera to a VR headset like the Quest Pro involves addressing challenges such as latency and the processing of hand tracking data. Utilizing resources like the [https://github.com/OpenTeleVision/TeleVision TeleVision GitHub repository] can be invaluable for developers aiming to stream camera feeds efficiently, crucial for applications requiring real-time data such as virtual reality or remote operation environments.
== Application Scenarios ==
Stereo cameras are versatile and can be adapted for numerous applications. For instance, one setup might utilize long cables for full-room scale monitoring, another for 360-degree local vision, and a third for specific stereo vision tasks. These configurations cater to the unique requirements of each application, whether it’s monitoring large spaces or creating immersive user experiences.
== Computational Considerations ==
When deploying stereo cameras, considering the computational load is crucial. Processing two raw images from stereo pairs might seem redundant, especially if the images are similar. Techniques like using CLIP-like models for encoding can reduce the need for processing both images in depth, as these models can intuit depth from high-level semantic content, thus conserving computational resources.
== Exploring Depth Sensing Techniques ==
Depth sensing in stereo cameras can be achieved through various technologies. While some utilize stereo disparity, others may incorporate structured light sensors for depth detection. Understanding the underlying technology is essential for optimizing the setup and ensuring efficient processing, as seen in RealSense cameras, which combine structured light sensing with stereo disparity to provide robust depth information without significant additional computational demands.
879eb32ce519bd05eee39767b76cdb7988518d8b
987
986
2024-05-13T01:47:35Z
Vrtnis
21
wikitext
text/x-wiki
This is a guide for setting up and experimenting with stereo cameras in your projects.
This guide is incomplete and a work in progress; you can help by expanding it!
== Choosing the Right Stereo Camera ==
In the realm of computer vision, selecting an appropriate stereo camera is fundamental.
Considerations such as resolution, compatibility, and specific features like Image Signal Processing (ISP) support are paramount.
For example, the [https://www.arducam.com/product/arducam-pivariety-18mp-ar1820hs-color-camera-module-for-rpi-cam-b0367/ Arducam Pivariety 18MP AR1820HS camera module] offers high resolution and is compatible with Raspberry Pi models, featuring auto exposure, auto white balance, and lens shading that are crucial for capturing high-quality images under varying lighting conditions.
== Implementation and Testing ==
Setting up and testing stereo cameras can vary based on the project's needs. For example, streaming from a USB stereo camera to a VR headset like the Quest Pro involves addressing challenges such as latency and the processing of hand tracking data.
Utilizing resources like the [https://github.com/OpenTeleVision/TeleVision TeleVision GitHub repository] can be invaluable for developers aiming to stream camera feeds efficiently, crucial for applications requiring real-time data such as virtual reality or remote operation environments.
== Application Scenarios ==
Stereo cameras are versatile and can be adapted for numerous applications. For instance, one setup might utilize long cables for full-room scale monitoring, another for 360-degree local vision, and a third for specific stereo vision tasks.
These configurations cater to the unique requirements of each application, whether it’s monitoring large spaces or creating immersive user experiences.
== Computational Considerations ==
When deploying stereo cameras, considering the computational load is crucial. Processing two raw images from stereo pairs might seem redundant, especially if the images are similar.
Techniques like using CLIP-like models for encoding can reduce the need for processing both images in depth, as these models can intuit depth from high-level semantic content, thus conserving computational resources.
== Exploring Depth Sensing Techniques ==
Depth sensing in stereo cameras can be achieved through various technologies. While some utilize stereo disparity, others may incorporate structured light sensors for depth detection. Understanding the underlying technology is essential for optimizing the setup and ensuring efficient processing, as seen in RealSense cameras, which combine structured light sensing with stereo disparity to provide robust depth information without significant additional computational demands.
== Community Insights on Stereo Cameras ==
The community has shared varied experiences and recommendations on stereo cameras, emphasizing the practical use and applications of different models. Notably, Intel RealSense cameras seem popular among users for their robust software and integration with ROS.
Despite some criticisms about the small baseline of the RealSense cameras, which can limit depth perception, alternatives like the Oak-D camera and Arducam's stereo cameras have been suggested for different needs. Oak-D is praised for its edge computing capabilities and high-quality image processing, while Arducam offers affordable options for Raspberry Pi and NVIDIA platforms.
Additionally, advanced users have discussed using the ZED2 camera for its superior baseline and resolution, and have compared various models for specific needs, such as indoor testing and 3D benchmarking with Kinect and Orbbec Astra cameras. The community also highlights the importance of considering off-the-shelf depth cameras that integrate stereo computation internally to save on development time and effort.
This section reflects ongoing discussions and is open for further contributions and updates from the community.
e2486ca6819c8519ad48dee8b390874de57f1503
988
987
2024-05-13T01:48:17Z
Vrtnis
21
wikitext
text/x-wiki
This is a guide for setting up and experimenting with stereo cameras in your projects.
This guide is incomplete and a work in progress; you can help by expanding it!
== Choosing the Right Stereo Camera ==
In the realm of computer vision, selecting an appropriate stereo camera is fundamental.
Considerations such as resolution, compatibility, and specific features like Image Signal Processing (ISP) support are paramount.
For example, the [https://www.arducam.com/product/arducam-pivariety-18mp-ar1820hs-color-camera-module-for-rpi-cam-b0367/ Arducam Pivariety 18MP AR1820HS camera module] offers high resolution and is compatible with Raspberry Pi models, featuring auto exposure, auto white balance, and lens shading that are crucial for capturing high-quality images under varying lighting conditions.
== Implementation and Testing ==
Setting up and testing stereo cameras can vary based on the project's needs. For example, streaming from a USB stereo camera to a VR headset like the Quest Pro involves addressing challenges such as latency and the processing of hand tracking data.
Utilizing resources like the [https://github.com/OpenTeleVision/TeleVision TeleVision GitHub repository] can be invaluable for developers aiming to stream camera feeds efficiently, crucial for applications requiring real-time data such as virtual reality or remote operation environments.
== Application Scenarios ==
Stereo cameras are versatile and can be adapted for numerous applications. For instance, one setup might utilize long cables for full-room scale monitoring, another for 360-degree local vision, and a third for specific stereo vision tasks.
These configurations cater to the unique requirements of each application, whether it’s monitoring large spaces or creating immersive user experiences.
== Computational Considerations ==
When deploying stereo cameras, considering the computational load is crucial. Processing two raw images from stereo pairs might seem redundant, especially if the images are similar.
Techniques like using CLIP-like models for encoding can reduce the need for processing both images in depth, as these models can intuit depth from high-level semantic content, thus conserving computational resources.
== Exploring Depth Sensing Techniques ==
Depth sensing in stereo cameras can be achieved through various technologies. While some utilize stereo disparity, others may incorporate structured light sensors for depth detection.
Understanding the underlying technology is essential for optimizing the setup and ensuring efficient processing, as seen in RealSense cameras, which combine structured light sensing with stereo disparity to provide robust depth information without significant additional computational demands.
== Community Insights on Stereo Cameras ==
The community has shared varied experiences and recommendations on stereo cameras, emphasizing the practical use and applications of different models. Notably, Intel RealSense cameras seem popular among users for their robust software and integration with ROS.
Despite some criticisms about the small baseline of the RealSense cameras, which can limit depth perception, alternatives like the Oak-D camera and Arducam's stereo cameras have been suggested for different needs. Oak-D is praised for its edge computing capabilities and high-quality image processing, while Arducam offers affordable options for Raspberry Pi and NVIDIA platforms.
Additionally, advanced users have discussed using the ZED2 camera for its superior baseline and resolution, and have compared various models for specific needs, such as indoor testing and 3D benchmarking with Kinect and Orbbec Astra cameras. The community also highlights the importance of considering off-the-shelf depth cameras that integrate stereo computation internally to save on development time and effort.
This section reflects ongoing discussions and is open for further contributions and updates from the community.
13033588ddc4fac7dc5e1f78a4bca591f60fdc51
989
988
2024-05-13T01:51:34Z
Vrtnis
21
wikitext
text/x-wiki
This is a guide for setting up and experimenting with stereo cameras in your projects.
This guide is incomplete and a work in progress; you can help by expanding it!
== Choosing the Right Stereo Camera ==
In the realm of computer vision, selecting an appropriate stereo camera is fundamental.
Considerations such as resolution, compatibility, and specific features like Image Signal Processing (ISP) support are paramount.
For example, the [https://www.arducam.com/product/arducam-pivariety-18mp-ar1820hs-color-camera-module-for-rpi-cam-b0367/ Arducam Pivariety 18MP AR1820HS camera module] offers high resolution and is compatible with Raspberry Pi models, featuring auto exposure, auto white balance, and lens shading that are crucial for capturing high-quality images under varying lighting conditions.
== Implementation and Testing ==
Setting up and testing stereo cameras can vary based on the project's needs. For example, streaming from a USB stereo camera to a VR headset like the Quest Pro involves addressing challenges such as latency and the processing of hand tracking data.
Utilizing resources like the [https://github.com/OpenTeleVision/TeleVision TeleVision GitHub repository] can be invaluable for developers aiming to stream camera feeds efficiently, crucial for applications requiring real-time data such as virtual reality or remote operation environments.
== Application Scenarios ==
Stereo cameras are versatile and can be adapted for numerous applications. For instance, one setup might utilize long cables for full-room scale monitoring, another for 360-degree local vision, and a third for specific stereo vision tasks.
These configurations cater to the unique requirements of each application, whether it’s monitoring large spaces or creating immersive user experiences.
== Computational Considerations ==
When deploying stereo cameras, considering the computational load is crucial. Processing two raw images from stereo pairs might seem redundant, especially if the images are similar.
Techniques like using CLIP-like models for encoding can reduce the need for processing both images in depth, as these models can intuit depth from high-level semantic content, thus conserving computational resources.
== Exploring Depth Sensing Techniques ==
Depth sensing in stereo cameras can be achieved through various technologies. While some utilize stereo disparity, others may incorporate structured light sensors for depth detection.
Understanding the underlying technology is essential for optimizing the setup and ensuring efficient processing, as seen in RealSense cameras, which combine structured light sensing with stereo disparity to provide robust depth information without significant additional computational demands.
== Community Insights on Stereo Cameras ==
The community has shared varied experiences and recommendations on stereo cameras, emphasizing the practical use and applications of different models. Notably, Intel RealSense cameras seem popular among users for their robust software and integration with ROS.
Despite some criticisms about the small baseline of the RealSense cameras, which can limit depth perception, alternatives like the Oak-D camera and Arducam's stereo cameras have been suggested for different needs. Oak-D is praised for its edge computing capabilities and high-quality image processing, while Arducam offers affordable options for Raspberry Pi and NVIDIA platforms.
Additionally, advanced users have discussed using the ZED2 camera for its superior baseline and resolution, and have compared various models for specific needs, such as indoor testing and 3D benchmarking with Kinect and Orbbec Astra cameras. The community also highlights the importance of considering off-the-shelf depth cameras that integrate stereo computation internally to save on development time and effort.
Users have shared their experiences with various depth-sensing technologies across different robotic platforms. For example, the Unitree robotics platforms and the mini-cheetah from MIT have incorporated RealSense cameras for mapping and environmental interaction.
The broader field of view afforded by using multiple cameras, as seen with Spot which employs five cameras, is advantageous for comprehensive environmental awareness. However, for specific research applications where movement is predominantly forward, a single camera might suffice.
The discussion also highlights the shift towards more sophisticated sensing technologies like solid-state lidars, though cost and weight remain significant considerations. For instance, the CERBERUS team from the DARPA SubT Challenge noted that while lidars provide more efficient depth data than multiple depth cameras, their higher cost and weight could be limiting factors depending on the robotic platform.
This section reflects ongoing discussions and is open for further contributions and updates from the community.
References:
[1] https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9196777
[2] https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8642374
[3] https://arxiv.org/pdf/2201.07067.pdf
f0f541ca4793768f23710895a39d366454e7c7f5
990
989
2024-05-13T01:52:34Z
Vrtnis
21
wikitext
text/x-wiki
This is a guide for setting up and experimenting with stereo cameras in your projects.
This guide is incomplete and a work in progress; you can help by expanding it!
== Choosing the Right Stereo Camera ==
In the realm of computer vision, selecting an appropriate stereo camera is fundamental.
Considerations such as resolution, compatibility, and specific features like Image Signal Processing (ISP) support are paramount.
For example, the [https://www.arducam.com/product/arducam-pivariety-18mp-ar1820hs-color-camera-module-for-rpi-cam-b0367/ Arducam Pivariety 18MP AR1820HS camera module] offers high resolution and is compatible with Raspberry Pi models, featuring auto exposure, auto white balance, and lens shading that are crucial for capturing high-quality images under varying lighting conditions.
== Implementation and Testing ==
Setting up and testing stereo cameras can vary based on the project's needs. For example, streaming from a USB stereo camera to a VR headset like the Quest Pro involves addressing challenges such as latency and the processing of hand tracking data.
Utilizing resources like the [https://github.com/OpenTeleVision/TeleVision TeleVision GitHub repository] can be invaluable for developers aiming to stream camera feeds efficiently, crucial for applications requiring real-time data such as virtual reality or remote operation environments.
== Application Scenarios ==
Stereo cameras are versatile and can be adapted for numerous applications. For instance, one setup might utilize long cables for full-room scale monitoring, another for 360-degree local vision, and a third for specific stereo vision tasks.
These configurations cater to the unique requirements of each application, whether it’s monitoring large spaces or creating immersive user experiences.
== Computational Considerations ==
When deploying stereo cameras, considering the computational load is crucial. Processing two raw images from stereo pairs might seem redundant, especially if the images are similar.
Techniques like using CLIP-like models for encoding can reduce the need for processing both images in depth, as these models can intuit depth from high-level semantic content, thus conserving computational resources.
== Exploring Depth Sensing Techniques ==
Depth sensing in stereo cameras can be achieved through various technologies. While some utilize stereo disparity, others may incorporate structured light sensors for depth detection.
Understanding the underlying technology is essential for optimizing the setup and ensuring efficient processing, as seen in RealSense cameras, which combine structured light sensing with stereo disparity to provide robust depth information without significant additional computational demands.
== Community Insights on Stereo Cameras ==
The community has shared varied experiences and recommendations on stereo cameras, emphasizing the practical use and applications of different models. Notably, Intel RealSense cameras seem popular among users for their robust software and integration with ROS.
Despite some criticisms about the small baseline of the RealSense cameras, which can limit depth perception, alternatives like the Oak-D camera and Arducam's stereo cameras have been suggested for different needs. Oak-D is praised for its edge computing capabilities and high-quality image processing, while Arducam offers affordable options for Raspberry Pi and NVIDIA platforms.
Additionally, advanced users have discussed using the ZED2 camera for its superior baseline and resolution, and have compared various models for specific needs, such as indoor testing and 3D benchmarking with Kinect and Orbbec Astra cameras. The community also highlights the importance of considering off-the-shelf depth cameras that integrate stereo computation internally to save on development time and effort.
Users have shared their experiences with various depth-sensing technologies across different robotic platforms. For example, the Unitree robotics platforms and the mini-cheetah from MIT have incorporated RealSense cameras for mapping and environmental interaction. The broader field of view afforded by using multiple cameras, as seen with Spot, which employs five cameras, is advantageous for comprehensive environmental awareness. However, for specific research applications where movement is predominantly forward, a single camera might suffice. The discussion also highlights the shift towards more sophisticated sensing technologies like solid-state lidars, though cost and weight remain significant considerations. For instance, the CERBERUS team from the DARPA SubT Challenge noted that while lidars provide more efficient depth data than multiple depth cameras, their higher cost and weight could be limiting factors depending on the robotic platform.
This section reflects ongoing discussions and is open for further contributions and updates from the community.
References:
[1] https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9196777
[2] https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8642374
[3] https://arxiv.org/pdf/2201.07067.pdf
bd1f55b2f6d2c9cfac54aef19eb210921c5a6000
991
990
2024-05-13T01:53:44Z
Vrtnis
21
wikitext
text/x-wiki
This is a guide for setting up and experimenting with stereo cameras in your projects.
This guide is incomplete and a work in progress; you can help by expanding it!
== Choosing the Right Stereo Camera ==
In the realm of computer vision, selecting an appropriate stereo camera is fundamental.
Considerations such as resolution, compatibility, and specific features like Image Signal Processing (ISP) support are paramount.
For example, the [https://www.arducam.com/product/arducam-pivariety-18mp-ar1820hs-color-camera-module-for-rpi-cam-b0367/ Arducam Pivariety 18MP AR1820HS camera module] offers high resolution and is compatible with Raspberry Pi models, featuring auto exposure, auto white balance, and lens shading that are crucial for capturing high-quality images under varying lighting conditions.
== Implementation and Testing ==
Setting up and testing stereo cameras can vary based on the project's needs. For example, streaming from a USB stereo camera to a VR headset like the Quest Pro involves addressing challenges such as latency and the processing of hand tracking data.
Utilizing resources like the [https://github.com/OpenTeleVision/TeleVision TeleVision GitHub repository] can be invaluable for developers aiming to stream camera feeds efficiently, crucial for applications requiring real-time data such as virtual reality or remote operation environments.
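For quick local testing before any headset streaming, the following is a minimal capture sketch; it assumes a UVC stereo camera that enumerates as device 0 and outputs a single side-by-side frame (left half, right half), which is common but not universal, so the device index and frame layout are assumptions.
<syntaxhighlight lang=python>
import cv2

# Assumption: the stereo camera is device 0 and delivers side-by-side frames.
cap = cv2.VideoCapture(0)

ok, frame = cap.read()
if ok:
    h, w, _ = frame.shape
    left, right = frame[:, : w // 2], frame[:, w // 2 :]
    cv2.imwrite("left.png", left)
    cv2.imwrite("right.png", right)

cap.release()
</syntaxhighlight>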
== Application Scenarios ==
Stereo cameras are versatile and can be adapted for numerous applications. For instance, one setup might utilize long cables for full-room scale monitoring, another for 360-degree local vision, and a third for specific stereo vision tasks.
These configurations cater to the unique requirements of each application, whether it’s monitoring large spaces or creating immersive user experiences.
== Computational Considerations ==
When deploying stereo cameras, considering the computational load is crucial. Processing two raw images from stereo pairs might seem redundant, especially if the images are similar.
Techniques like using CLIP-like models for encoding can reduce the need for processing both images in depth, as these models can intuit depth from high-level semantic content, thus conserving computational resources.
== Exploring Depth Sensing Techniques ==
Depth sensing in stereo cameras can be achieved through various technologies. While some utilize stereo disparity, others may incorporate structured light sensors for depth detection.
Understanding the underlying technology is essential for optimizing the setup and ensuring efficient processing, as seen in RealSense cameras, which combine structured light sensing with stereo disparity to provide robust depth information without significant additional computational demands.
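For readers who want to see the stereo-disparity route in practice, here is a minimal sketch using OpenCV's StereoSGBM matcher. It assumes an already rectified grayscale pair; the filenames, focal length, and baseline are placeholder values, not any particular camera's calibration.
<syntaxhighlight lang=python>
import cv2
import numpy as np

# Assumption: the pair is already rectified; filenames are placeholders.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=64,   # must be a multiple of 16
    blockSize=7,
)

# compute() returns fixed-point disparities scaled by 16.
disparity = stereo.compute(left, right).astype(np.float32) / 16.0

# Placeholder intrinsics/baseline -- replace with your camera's calibration.
focal_length_px = 700.0
baseline_m = 0.06
valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = focal_length_px * baseline_m / disparity[valid]
</syntaxhighlight>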
== Community Insights on Stereo Cameras ==
The community has shared varied experiences and recommendations on stereo cameras, emphasizing the practical use and applications of different models. Notably, Intel RealSense cameras seem popular among users for their robust software and integration with ROS.
Despite some criticisms about the small baseline of the RealSense cameras, which can limit depth perception, alternatives like the Oak-D camera and Arducam's stereo cameras have been suggested for different needs. Oak-D is praised for its edge computing capabilities and high-quality image processing, while Arducam offers affordable options for Raspberry Pi and NVIDIA platforms.
Additionally, advanced users have discussed using the ZED2 camera for its superior baseline and resolution, and have compared various models for specific needs, such as indoor testing and 3D benchmarking with Kinect and Orbbec Astra cameras. The community also highlights the importance of considering off-the-shelf depth cameras that integrate stereo computation internally to save on development time and effort.
Users have shared their experiences with various depth-sensing technologies across different robotic platforms. For example, the Unitree robotics platforms and the mini-cheetah from MIT have incorporated RealSense cameras for mapping and environmental interaction. The broader field of view afforded by using multiple cameras, as seen with Spot, which employs five cameras, is advantageous for comprehensive environmental awareness. However, for specific research applications where movement is predominantly forward, a single camera might suffice. Builders in the robotics community also highlight the shift towards more sophisticated sensing technologies like solid-state lidars, though cost and weight remain significant considerations. For instance, the CERBERUS team from the DARPA SubT Challenge noted that while lidars provide more efficient depth data than multiple depth cameras, their higher cost and weight could be limiting factors depending on the robotic platform.
This section reflects ongoing discussions and is open for further contributions and updates from the community.
References:
[1] https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9196777
[2] https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8642374
[3] https://arxiv.org/pdf/2201.07067.pdf
197e934b79d6dbc0df8a55cf483d470d1b63180e
User:Dennisc
2
232
992
2024-05-13T02:49:16Z
Dennisc
27
Created page with "Dennis is a technical intern and a 2nd-year CS/Math student at CMU. {{infobox person | name = Dennis Chen | organization = [[K-Scale Labs]] | title = Technical Intern | websi..."
wikitext
text/x-wiki
Dennis is a technical intern and a 2nd-year CS/Math student at CMU.
{{infobox person
| name = Dennis Chen
| organization = [[K-Scale Labs]]
| title = Technical Intern
| website_link = https://andrew.cmu.edu/~dennisc2
}}
[[Category: K-Scale Employees]]
674a6d4feb9f7b1a0d2fb07414752de4ceb48c74
G1
0
233
997
2024-05-13T07:32:04Z
Ben
2
Created page with "The G1 humanoid robot is an upcoming humanoid robot from [[Unitree]]. {{infobox robot | name = G1 | organization = [[Unitree]] | video_link = https://mp.weixin.qq.com/s/RGNVR..."
wikitext
text/x-wiki
The G1 is an upcoming humanoid robot from [[Unitree]].
{{infobox robot
| name = G1
| organization = [[Unitree]]
| video_link = https://mp.weixin.qq.com/s/RGNVRazZqDn3y_Ijemc5Kw
| cost = RMB 99,000
| height = 127 cm
| weight = 35 kg
| speed =
| lift_force =
| battery_life =
| battery_capacity =
| purchase_link =
| number_made =
| dof =
| status =
}}
8e508d43852ccba65735ef68c91327fe81ec8a21
K-Scale Intern Onboarding
0
139
999
722
2024-05-13T17:14:32Z
Ben
2
wikitext
text/x-wiki
Congratulations on your internship at K-Scale Labs! We are excited for you to join us.
=== Onboarding ===
* Watch out for an email from Gusto (our HR software), with an official offer letter and instructions on how to onboard you into our system.
* Once you accept, Ben will add you to the system, after which you will have to enter your bank account information in order to be paid.
=== Pre-Internship Checklist ===
* Create a wiki account and mark yourself as an employee (you can use [[User:Ben]] as a template). You'll use your account as the main way to keep track of what you've done over the course of the internship.
* Contribute an article about something you find interesting. See the [[Contributing]] guide.
=== What To Bring ===
* Bring your living essentials (clothing, toothbrush, etc.)
* For your first day, you will need documents for your I9 authorizing you to work with us. The easiest is to just bring your passport or passport card. Alternatively, you'll need your driver's license or a federal photo ID, AND your social security card or birth certificate.
=== Additional Notes ===
* For travel expenses, please purchase your own flight and keep your receipts so that we can reimburse you later.
[[Category:K-Scale]]
28a762addcd0196bffcb795f8f5d1fdcdbc798b3
LiDAR
0
234
1000
2024-05-13T17:49:18Z
Vrtnis
21
Created page with "This guide provides insights and recommendations for choosing cost-effective 3D LiDAR systems for various applications. ==== Unitree L1 ==== This model is one of the most aff..."
wikitext
text/x-wiki
This guide provides insights and recommendations for choosing cost-effective 3D LiDAR systems for various applications.
==== Unitree L1 ====
This model is one of the most affordable 3D LiDAR systems that might fit the budget constraints of students and hobbyists. While not perfect, it offers basic functionality needed for entry-level 3D scanning tasks.
==== Stereoscopic Cameras ====
As an alternative to traditional LiDAR systems, stereoscopic cameras use dual lenses to capture spatial data, providing 3D perception at a potentially lower cost. This technology can be a viable option for those unable to afford dedicated 3D LiDAR systems.
=== Understanding LiDAR Technologies ===
==== Regular 2D LiDAR ====
These devices create a 2D plane of points by rotating around a vertical axis. They are generally more affordable and can offer a high sample rate, making them suitable for applications requiring planar data.
==== Stacked 2D LiDARs (Multi-layer 3D LiDAR) ====
Often marketed as 3D LiDAR, these are essentially multiple 2D LiDARs arranged vertically at different angles, rotating around the same axis. They tend to be expensive and do not produce a dense 3D point cloud unless moved in space, making them less ideal for static applications.
==== 3D Scanners ====
These systems use a 2D LiDAR tilted 90 degrees, combined with a secondary mechanism that rotates the device around its longitudinal axis. While they generate detailed 3D point clouds, they do so slowly and are not suited for real-time applications.
=== Budget-Friendly Alternatives and DIY Solutions ===
==== Smartphone Depth Sensors ====
For those with access to newer smartphones, utilizing built-in depth sensors can be a cost-effective way to gather 3D data. These devices, like the iPhone's TrueDepth camera, can generate usable 2D depth maps or 3D point clouds.
==== DIY 3D Scanners ====
Constructing a 3D scanner from components such as a 2D LiDAR, a Raspberry Pi, and a camera can be an educational and affordable project. This approach requires more technical expertise but allows for customization and potentially lower costs.
Choosing the right LiDAR system depends on the specific needs of the project and the available budget. For projects where real-time data is not critical, alternatives like stereoscopic cameras or DIY solutions might be adequate. However, for more demanding applications, investing in a higher-quality 3D LiDAR may be necessary.
e016c33fb7af30f9439dc172af8948408ff03dbe
1002
1000
2024-05-13T17:52:13Z
Vrtnis
21
wikitext
text/x-wiki
This guide provides insights and recommendations for choosing cost-effective 3D LiDAR systems for various applications.
This guide is incomplete and a work in progress; you can help by expanding it!
==== Unitree L1 ====
This model is one of the most affordable 3D LiDAR systems that might fit the budget constraints of students and hobbyists. While not perfect, it offers basic functionality needed for entry-level 3D scanning tasks.
==== Stereoscopic Cameras ====
As an alternative to traditional LiDAR systems, stereoscopic cameras use dual lenses to capture spatial data, providing 3D perception at a potentially lower cost. This technology can be a viable option for those unable to afford dedicated 3D LiDAR systems.
=== Understanding LiDAR Technologies ===
==== Regular 2D LiDAR ====
These devices create a 2D plane of points by rotating around a vertical axis. They are generally more affordable and can offer a high sample rate, making them suitable for applications requiring planar data.
==== Stacked 2D LiDARs (Multi-layer 3D LiDAR) ====
Often marketed as 3D LiDAR, these are essentially multiple 2D LiDARs arranged vertically at different angles, rotating around the same axis. They tend to be expensive and do not produce a dense 3D point cloud unless moved in space, making them less ideal for static applications.
==== 3D Scanners ====
These systems use a 2D LiDAR tilted 90 degrees, combined with a secondary mechanism that rotates the device around its longitudinal axis. While they generate detailed 3D point clouds, they do so slowly and are not suited for real-time applications.
=== Budget-Friendly Alternatives and DIY Solutions ===
==== Smartphone Depth Sensors ====
For those with access to newer smartphones, utilizing built-in depth sensors can be a cost-effective way to gather 3D data. These devices, like the iPhone's TrueDepth camera, can generate usable 2D depth maps or 3D point clouds.
==== DIY 3D Scanners ====
Constructing a 3D scanner from components such as a 2D LiDAR, a Raspberry Pi, and a camera can be an educational and affordable project. This approach requires more technical expertise but allows for customization and potentially lower costs.
Choosing the right LiDAR system depends on the specific needs of the project and the available budget. For projects where real-time data is not critical, alternatives like stereoscopic cameras or DIY solutions might be adequate. However, for more demanding applications, investing in a higher-quality 3D LiDAR may be necessary.
dfbb71c6b56c886ad9850df12c55d1e20c0d1170
K-Scale CANdaddy
0
235
1001
2024-05-13T17:49:31Z
Matt
16
add candaddy page
wikitext
text/x-wiki
CANdaddy info here
89d9bfea9212efe08992b78609fc3e6b68f18936
Haptic Technology
0
236
1003
2024-05-13T18:13:22Z
Vrtnis
21
Created page with " Haptic technology in robotics has transformed the landscape of remote operations, offering a more nuanced and precise control that mimics human touch. This technological adv..."
wikitext
text/x-wiki
Haptic technology in robotics has transformed the landscape of remote operations, offering a more nuanced and precise control that mimics human touch. This technological advancement allows operators to control robots from afar, feeling what the robots feel as if they were physically present, which dramatically improves the operator's ability to perform complex tasks.
eb71bef2b8ae56965544398944d28afe2b853568
1004
1003
2024-05-13T18:13:57Z
Vrtnis
21
wikitext
text/x-wiki
This guide is incomplete and a work in progress; you can help by expanding it!
Haptic technology in robotics has transformed the landscape of remote operations, offering a more nuanced and precise control that mimics human touch. This technological advancement allows operators to control robots from afar, feeling what the robots feel as if they were physically present, which dramatically improves the operator's ability to perform complex tasks.
e44ec013de73a3c5766137efbb1a7f216ef1999c
1005
1004
2024-05-13T18:52:04Z
Vrtnis
21
wikitext
text/x-wiki
This guide is incomplete and a work in progress; you can help by expanding it!
Haptic technology in robotics has transformed the landscape of remote operations, offering a more nuanced and precise control that mimics human touch. This technological advancement allows operators to control robots from afar, feeling what the robots feel as if they were physically present, which dramatically improves the operator's ability to perform complex tasks.
* '''HaptX Gloves and the Tactile Telerobot''': Developed in collaboration with the Converge Robotics Group, the HaptX Gloves are part of the innovative Tactile Telerobot system. This setup enables precise, intuitive control over robotic hands from thousands of miles away, with the gloves providing real-time tactile sensations that simulate actual contact with various objects.
===Improving Remote Interaction ===
The haptic gloves are designed to reduce latency and increase the accuracy of feedback, making remote operations feel more immediate and intuitive. For example, a notable test involved using these gloves to type a message on a keyboard situated over 5,000 miles away, demonstrating the critical role of tactile feedback in performing everyday tasks remotely.
d12ea3ab97d7a1ddd2a6a25625bba2c52bc92f85
1006
1005
2024-05-13T18:54:09Z
Vrtnis
21
wikitext
text/x-wiki
This guide is incomplete and a work in progress; you can help by expanding it!
Haptic technology in robotics has transformed the landscape of remote operations, offering a more nuanced and precise control that mimics human touch. This technological advancement allows operators to control robots from afar, feeling what the robots feel as if they were physically present, which dramatically improves the operator's ability to perform complex tasks.
=== HaptX Gloves and the Tactile Telerobot ===
Developed in collaboration with the Converge Robotics Group, the HaptX Gloves are part of the innovative Tactile Telerobot system. This setup enables precise, intuitive control over robotic hands from thousands of miles away, with the gloves providing real-time tactile sensations that simulate actual contact with various objects.
===Improving Remote Interaction ===
The haptic gloves are designed to reduce latency and increase the accuracy of feedback, making remote operations feel more immediate and intuitive. For example, a notable test involved using these gloves to type a message on a keyboard situated over 5,000 miles away, demonstrating the critical role of tactile feedback in performing everyday tasks remotely.
017fd6cba94260e8b065f75a42a9b311c3664f08
1007
1006
2024-05-13T18:57:52Z
Vrtnis
21
wikitext
text/x-wiki
This guide is incomplete and a work in progress; you can help by expanding it!
__TOC__
Haptic technology in robotics has transformed the landscape of remote operations, offering a more nuanced and precise control that mimics human touch. This technological advancement allows operators to control robots from afar, feeling what the robots feel as if they were physically present, which dramatically improves the operator's ability to perform complex tasks.
=== HaptX Gloves and the Tactile Telerobot ===
Developed in collaboration with the Converge Robotics Group, the HaptX Gloves are part of the innovative Tactile Telerobot system. This setup enables precise, intuitive control over robotic hands from thousands of miles away, with the gloves providing real-time tactile sensations that simulate actual contact with various objects.
Among the leading developments in this area is the Shadow Dexterous Hand, developed by The Shadow Robot Company in London. This humanoid robot hand is comparable to the average human hand in size and shape and exceeds the human hand in terms of mechanical flexibility and control options.
===Design and Functionality===
The Shadow Dexterous Hand boasts 24 joints and 20 degrees of freedom, surpassing the human hand's complexity. It is designed to mimic the range of movement of a typical human, featuring a sophisticated joint structure in its fingers and thumb:
* Each of the four fingers is equipped with two one-axis joints linking the phalanges and one universal joint at the metacarpal.
* The little finger includes an additional one-axis joint for enhanced palm curl movements.
* The thumb features a one-axis joint for distal movements, a universal joint at the metacarpal, and another one-axis joint to aid palm curling.
* The wrist consists of two joints that provide flexion/extension and adduction/abduction movements.
The Shadow Hand is available in both electric motor-driven and pneumatic muscle-driven models. The electric version operates with DC motors located in the forearm, while the pneumatic version uses antagonistic pairs of air muscles for movement. Each model is equipped with Hall effect sensors in every joint for precise positional feedback. Additional tactile sensing capabilities range from basic pressure sensors to advanced multimodal tactile sensors like the BioTac from Syntouch Inc., enhancing the hand's sensory feedback to mimic human touch and interaction closely.
===Software and Simulation===
Control and simulation of the Shadow Hand are facilitated through the Robot Operating System (ROS), which provides tools for configuration, calibration, and simulation. This integration allows for extensive testing and development in a virtual environment, aiding in the hand's continuous improvement and adaptation for various applications.
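As a small illustration of the ROS workflow mentioned above (and not Shadow's own command interface), the following sketch publishes a joint target as a standard <code>sensor_msgs/JointState</code> message; the topic name and joint names are hypothetical placeholders.
<syntaxhighlight lang=python>
import rospy
from sensor_msgs.msg import JointState

# Hypothetical topic and joint names -- the real hand exposes its own
# controller interfaces; this only illustrates the ROS publishing pattern.
rospy.init_node("hand_target_example")
pub = rospy.Publisher("/hand/joint_targets", JointState, queue_size=10)

rospy.sleep(1.0)  # give the publisher time to connect

msg = JointState()
msg.header.stamp = rospy.Time.now()
msg.name = ["FFJ3", "FFJ0"]   # placeholder joint names
msg.position = [0.5, 0.2]     # radians
pub.publish(msg)
</syntaxhighlight>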
d45efe9bf871cbb56a76f119e567bd5059401f2f
Conversational Module
0
237
1008
2024-05-13T22:16:53Z
Budzianowski
19
Created page with "We want to build a simplified conversational pipeline that will allow to talk and give orders to Stompy '''on device'''. We also believe that this could be long-term wise a st..."
wikitext
text/x-wiki
We want to build a simplified conversational pipeline that will allow users to talk to Stompy and give it orders '''on device'''. We also believe that, in the long term, this could be a standalone product.
High level observations:
# ASR is 'solved' for non-noisy, single-user speech; multi-speaker, noisy environments with intelligent VAD are a hard problem
# LLMs are 'solved', with 3B models being smart enough to handle orders and hold a laid-back conversation
# TTS is 'solved' for non-conversational use cases (reading news, books) with large models; a fast, 'intelligent' voice is an open problem
# Building a demo is simple; building a setup that is robust to noise and edge cases is not
==Pipeline parts==
===ASR===
# Latency - most systems, properly set up, can get below 100 ms, but knowing when to stop listening is a VAD problem
# VAD - Voice Activity Detection (fundamental to conversational applications), typically benchmarked against https://github.com/snakers4/silero-vad; most systems are very simple and dumb (ML circa 2015)
# Multispeaker - most models are clueless in multi-speaker environments
# Barge-in - the option to interrupt the system; in any noisy environment it completely disrupts the pipeline, and none of the existing ASR systems are trained to deal with this properly (ASR doesn't understand the context)
# Noise - in-house data is fundamental for fine-tuning a big model to a specific environment (20 hours is often enough to ground it in the use case)
# Keyword detection - models that focus on spotting only one word (Alexa, Siri)
# Best conversational production systems - only a few: Google, RIVA, Deepgram
===LLM===
# Vision support is starting to become available off-the-shelf
===TTS===
# Quality - conversational vs. reading ability; most models don't really understand the context
# Latency - most open-source models are either fast or high quality, rarely both
# Streaming option - only now are the first truly streaming-oriented models appearing
==Current Setup==
We build on the Jetson through a tight Docker Compose module because of multiple conflicting requirements. Dusty created a fantastic repository of many e2e setups at https://www.jetson-ai-lab.com/tutorial-intro.html However, ASR and TTS there are typically based on RIVA, which is not a setup you would want to keep or support. The build process for each model requires a lot of small changes to the setup. Currently, getting an e2e setup requires composing different images into a pipeline loop.
0. Audio<br>
The Linux interaction between ALSA, PulseAudio, and sudo vs. user sessions is ironically the most demanding part. See https://0pointer.de/blog/projects/guide-to-sound-apis.html or https://www.reddit.com/r/archlinux/comments/ae67oa/lets_talk_about_how_the_linuxarch_sound/.
The most stable approach is to rely on PyAudio to handle dmix issues and frequent changes in hardware availability.
1. ASR<br>
* Whisper is good but nowhere close to a production-ready ASR system
* RIVA is the best option (production-ready, reliable); however, it requires per-GPU licensing, which is not a good model long term
* Any other options are either bad or not production ready
2. LLM<br>
* Rely on Ollama and NanoLLM, which support the latest SLMs (below 7B)
* Fast and well behaved under the onnxruntime library
3. TTS<br>
* FastPitch (RIVA) - fast but old-style and poor quality
* StyleTTS2 - really good quality; requires a big clean-up to make it TensorRT-ready, currently 200 ms on an A100
* xTTS - OK quality but slow even with TensorRT (dropped)
* PiperTTS (old VITS model) - fast but old-style and poor quality
==Long-term bets==
# E2E
625a4a5f81d764dce16b32aa0f0701d753426c67
1009
1008
2024-05-13T22:17:09Z
Budzianowski
19
wikitext
text/x-wiki
We want to build a simplified conversational pipeline that will allow users to talk to Stompy and give it orders '''on device'''. We also believe that, in the long term, this could be a standalone product.
High level observations:
# ASR is 'solved' for non-noisy, single-user speech; multi-speaker, noisy environments with intelligent VAD are a hard problem
# LLMs are 'solved', with 3B models being smart enough to handle orders and hold a laid-back conversation
# TTS is 'solved' for non-conversational use cases (reading news, books) with large models; a fast, 'intelligent' voice is an open problem
# Building a demo is simple; building a setup that is robust to noise and edge cases is not
==Pipeline parts==
===ASR===
# Latency - most systems, properly set up, can get below 100 ms, but knowing when to stop listening is a VAD problem
# VAD - Voice Activity Detection (fundamental to conversational applications), typically benchmarked against https://github.com/snakers4/silero-vad; most systems are very simple and dumb (ML circa 2015)
# Multispeaker - most models are clueless in multi-speaker environments
# Barge-in - the option to interrupt the system; in any noisy environment it completely disrupts the pipeline, and none of the existing ASR systems are trained to deal with this properly (ASR doesn't understand the context)
# Noise - in-house data is fundamental for fine-tuning a big model to a specific environment (20 hours is often enough to ground it in the use case)
# Keyword detection - models that focus on spotting only one word (Alexa, Siri)
# Best conversational production systems - only a few: Google, RIVA, Deepgram
===LLM===
# Vision support is starting to become available off-the-shelf
===TTS===
# Quality - conversational vs. reading ability; most models don't really understand the context
# Latency - most open-source models are either fast or high quality, rarely both
# Streaming option - only now are the first truly streaming-oriented models appearing
==Current Setup==
We build on the Jetson through a tight Docker Compose module because of multiple conflicting requirements. Dusty created a fantastic repository of many e2e setups at https://www.jetson-ai-lab.com/tutorial-intro.html However, ASR and TTS there are typically based on RIVA, which is not a setup you would want to keep or support. The build process for each model requires a lot of small changes to the setup. Currently, getting an e2e setup requires composing different images into a pipeline loop.
0. Audio<br>
The Linux interaction between ALSA, PulseAudio, and sudo vs. user sessions is ironically the most demanding part. See https://0pointer.de/blog/projects/guide-to-sound-apis.html or https://www.reddit.com/r/archlinux/comments/ae67oa/lets_talk_about_how_the_linuxarch_sound/.
The most stable approach is to rely on PyAudio to handle dmix issues and frequent changes in hardware availability.
1. ASR<br>
* Whisper is good but nowhere close to a production-ready ASR system
* RIVA is the best option (production-ready, reliable); however, it requires per-GPU licensing, which is not a good model long term
* Any other options are either bad or not production ready
2. LLM<br>
* Rely on Ollama and NanoLLM, which support the latest SLMs (below 7B)
* Fast and well behaved under the onnxruntime library
3. TTS<br>
* FastPitch (RIVA) - fast but old-style and poor quality
* StyleTTS2 - really good quality; requires a big clean-up to make it TensorRT-ready, currently 200 ms on an A100
* xTTS - OK quality but slow even with TensorRT (dropped)
* PiperTTS (old VITS model) - fast but old-style and poor quality
==Long-term bets==
# E2E
[[Category: Firmware]]
53297fa72d22c084a22b9bd65557b23b431c7b0b
1010
1009
2024-05-13T22:44:48Z
Budzianowski
19
wikitext
text/x-wiki
We want to build a simplified conversational pipeline that will allow users to talk to Stompy and give it orders '''on device'''. We also believe that, in the long term, this could be a standalone product.
High level observations:
# ASR is 'solved' for non-noisy, single-user speech; multi-speaker, noisy environments with intelligent VAD are a hard problem
# LLMs are 'solved', with 3B models being smart enough to handle orders and hold a laid-back conversation
# TTS is 'solved' for non-conversational use cases (reading news, books) with large models; a fast, 'intelligent' voice is an open problem
# Building a demo is simple; building a setup that is robust to noise and edge cases is not
==Pipeline parts==
===ASR===
# Latency - most systems, properly set up, can get below 100 ms, but knowing when to stop listening is a VAD problem
# VAD - Voice Activity Detection (fundamental to conversational applications), typically benchmarked against https://github.com/snakers4/silero-vad; most systems are very simple and dumb (ML circa 2015) - see the sketch after this list
# Multispeaker - most models are clueless in multi-speaker environments
# Barge-in - the option to interrupt the system; in any noisy environment it completely disrupts the pipeline, and none of the existing ASR systems are trained to deal with this properly (ASR doesn't understand the context)
# Noise - in-house data is fundamental for fine-tuning a big model to a specific environment (20 hours is often enough to ground it in the use case)
# Keyword detection - models that focus on spotting only one word (Alexa, Siri)
# Best conversational production systems - only a few: Google, RIVA, Deepgram
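As a concrete reference point for the VAD item above, the following is a minimal sketch of running Silero VAD on a 16 kHz mono WAV file via <code>torch.hub</code>, following the usage documented in the linked repository; the filename is a placeholder.
<syntaxhighlight lang=python>
import torch

# Load the Silero VAD model and its helper utilities from torch.hub.
model, utils = torch.hub.load("snakers4/silero-vad", "silero_vad")
get_speech_timestamps, _, read_audio, _, _ = utils

# Placeholder file: a 16 kHz mono speech recording.
wav = read_audio("sample.wav", sampling_rate=16000)

# Returns a list of {'start': ..., 'end': ...} sample indices of detected speech.
timestamps = get_speech_timestamps(wav, model, sampling_rate=16000)
print(timestamps)
</syntaxhighlight>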
===LLM===
# Vision support is starting to become available off-the-shelf
===TTS===
# Quality - conversational vs. reading ability; most models don't really understand the context
# Latency - most open-source models are either fast or high quality, rarely both
# Streaming option - only now are the first truly streaming-oriented models appearing
==Current Setup==
We build on the Jetson through a tight Docker Compose module because of multiple conflicting requirements. Dusty created a fantastic repository of many e2e setups at https://www.jetson-ai-lab.com/tutorial-intro.html However, ASR and TTS there are typically based on RIVA, which is not a setup you would want to keep or support. The build process for each model requires a lot of small changes to the setup. Currently, getting an e2e setup requires composing different images into a pipeline loop.
0. Audio<br>
The Linux interaction between ALSA, PulseAudio, and sudo vs. user sessions is ironically the most demanding part. See https://0pointer.de/blog/projects/guide-to-sound-apis.html or https://www.reddit.com/r/archlinux/comments/ae67oa/lets_talk_about_how_the_linuxarch_sound/.
The most stable approach is to rely on PyAudio to handle dmix issues and frequent changes in hardware availability. A minimal capture sketch is shown below.
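The following is a minimal sketch of the PyAudio approach (the default input device, buffer size, and one-second duration are assumptions): it captures 16 kHz mono audio of the kind an ASR front end would consume.
<syntaxhighlight lang=python>
import pyaudio

RATE = 16000   # 16 kHz mono is what most ASR front ends expect
CHUNK = 512

p = pyaudio.PyAudio()
stream = p.open(format=pyaudio.paInt16, channels=1, rate=RATE,
                input=True, frames_per_buffer=CHUNK)

# Capture roughly one second of audio from the default input device.
frames = [stream.read(CHUNK) for _ in range(RATE // CHUNK)]
audio_bytes = b"".join(frames)

stream.stop_stream()
stream.close()
p.terminate()
</syntaxhighlight>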
1. ASR<br>
* Whisper is good but nowhere close to a production-ready ASR system
* RIVA is the best option (production-ready, reliable); however, it requires per-GPU licensing, which is not a good model long term
* Any other options are either bad or not production ready
2. LLM<br>
* Rely on Ollama and NanoLLM, which support the latest SLMs (below 7B)
* Fast and well behaved under the onnxruntime library
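To illustrate the Ollama side of the LLM stage, here is a minimal sketch that sends a single prompt to a locally running Ollama server over its HTTP API; the model name, prompt, and port are assumptions based on Ollama's defaults, not the pipeline's actual configuration.
<syntaxhighlight lang=python>
import requests

# Assumption: Ollama is running locally on its default port with a small model pulled.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "phi3:mini",   # placeholder small model
        "prompt": "Stompy, please wave your right arm.",
        "stream": False,        # return one JSON object instead of a stream
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["response"])
</syntaxhighlight>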
3. TTS<br>
* FastPitch (RIVA) - fast but old-style and poor quality
* [https://github.com/yl4579/StyleTTS2 StyleTTS2] - really good quality; requires a big clean-up to make it TensorRT-ready, currently 200 ms on an A100
* xTTS - OK quality but slow even with TensorRT (dropped)
* PiperTTS (old VITS model) - fast but old-style and poor quality
==Long-term bets==
# E2E - end-to-end models that replace the separate ASR → LLM → TTS stages
[[Category: Firmware]]
26d24f6715766da2e10266d23b8e6dde8eed9d5b
LiDAR
0
234
1011
1002
2024-05-14T17:22:25Z
Vrtnis
21
wikitext
text/x-wiki
This guide provides insights and recommendations for choosing cost-effective 3D LiDAR systems for various applications.
This guide is incomplete and a work in progress; you can help by expanding it!
==== Unitree L1 ====
This model is one of the most affordable 3D LiDAR systems that might fit the budget constraints of students and hobbyists. While not perfect, it offers basic functionality needed for entry-level 3D scanning tasks.
==== Stereoscopic Cameras ====
An alternative to traditional LiDAR systems, stereoscopic cameras use dual lenses to capture spatial data, providing a 3D perception at a potentially lower cost. This technology can be a viable option for those unable to afford dedicated 3D LiDAR systems.
=== Understanding LiDAR Technologies ===
==== Regular 2D LiDAR ====
These devices create a 2D plane of points by rotating around a vertical axis. They are generally more affordable and can offer a high sample rate, making them suitable for applications requiring planar data.
==== Stacked 2D LiDARs (Multi-layer 3D LiDAR) ====
Often marketed as 3D LiDAR, these are essentially multiple 2D LiDARs arranged vertically at different angles, rotating around the same axis. They tend to be expensive and do not produce a dense 3D point cloud unless moved in space, making them less ideal for static applications.
==== 3D Scanners ====
These systems use a 2D LiDAR tilted 90 degrees, combined with a secondary mechanism that rotates the device around its longitudinal axis. While they generate detailed 3D point clouds, they do so slowly and are not suited for real-time applications.
=== Budget-Friendly Alternatives and DIY Solutions ===
==== Smartphone Depth Sensors ====
For those with access to newer smartphones, utilizing built-in depth sensors can be a cost-effective way to gather 3D data. These devices, like the iPhone's TrueDepth camera, can generate usable 2D depth maps or 3D point clouds.
==== DIY 3D Scanners ====
Constructing a 3D scanner from components such as a 2D LiDAR, a Raspberry Pi, and a camera can be an educational and affordable project. This approach requires more technical expertise but allows for customization and potentially lower costs.
Choosing the right LiDAR system depends on the specific needs of the project and the available budget. For projects where real-time data is not critical, alternatives like stereoscopic cameras or DIY solutions might be adequate. However, for more demanding applications, investing in a higher-quality 3D LiDAR may be necessary.
{| class="wikitable"
|+
! Product !! Brand !! SKU !! Approx Price !! Description
|-
| RPlidar S2 360° Laser Scanner (30 m) || SLAMTEC || RB-Rpk-20 || $399.00 || Offers 30 meters radius ranging distance. Adopts the low power infrared laser light. Provides excellent ranging accuracy. Works well both in an indoor and outdoor environment. Proof Level: IP65
|-
| Hokuyo UST-10LX Scanning Laser Rangefinder || HOKUYO || RB-Hok-24 || $1,670.00 || Small, accurate, high-speed scanning laser range finder. Obstacle detection and localization of autonomous robots. Detection distance (maximum): 30m. Light source: Laser semiconductor (905nm). Scan speed: 25ms (Motor speed 2400rpm)
|-
| RPLidar A1M8 - 360 Degree Laser Scanner Development Kit || SLAMTEC || RB-Rpk-03 || $85.00 || 360° Omnidirectional Laser Scan. 5.5 - 10Hz Adaptive Scan Frequency. Sample Frequency: 4000 - 8000Hz. Distance range: 0.15 - 12m. Works With SLAMAIg
|-
| Slamtec RPLIDAR A3 360° Laser Scanner (25 m) || SLAMTEC || RB-Rpk-07 || $599.00 || White object: 25 m, Dark object: 10 m (Enhanced mode). White object: 20 m, Dark object: TBD (Outdoor mode). Sample Rate: 16000 - 10000 times per second. Scan Rate: 10-20 Hz. Angular Resolution: 0.225°. Supports former SDK protocols
|-
| Slamtec Mapper Pro M2M2 360° Laser Mapping Sensor (40 m) || SLAMTEC || RB-Rpk-15 || $699.00 || Features a 40 m laser mapping sensor. Provides large scenarios and high-quality mapping. Offers integrating map building and real-time localization and navigation. Offers tilting compensation and fast-moving
|-
| LIDAR-Lite 3 Laser Rangefinder || GARMIN || RB-Pli-06 || $124.95 || Compact 48mm x 40mm x 20mm module with 40m measuring range. Signal Processing Improvements offer 5X Faster Measurement Speeds. Improved I2C Communications and assignable I2C Addressing. Great for drones, robotics and other demanding applications
|-
| Benewake TF03 LIDAR LED Rangefinder IP67 (100 m) || BENEWAKE || RB-Ben-12 || $218.46 || Features excellent performance and a compact size. Upgrades more than ten key indicators and opens multiple expansion functions. Is used in the fields of intelligent transportation and industrial safety warning. Provides an industrial grade, long-distance of 100 m
|-
| RPLIDAR A2M12 360° Laser Range Scanner || SLAMTEC || RB-Rpk-22 || $229.00 || Offers a low-cost 360° Laser Range Scanner. Comes with a rotation speed detection and adaptive system. Features low noise brushless motor. Adopts the low-power infrared laser light. Can run smoothly without any noise
|-
| RPLIDAR A2M8 360° Laser Scanner (DISCONTINUED) || SLAMTEC || RB-Rpk-02 || $319.00 || Low Cost 360 Degree Laser Range Scanner. Sample Frequency: 2000 - 8000 Hz. Scan Rate: 5 - 15Hz. Distance Range: 0.15 - 12m. Angular Resolution: 0.45 - 1.35°. Barrel connector available Small Barrel Connector Assembly 0.7x2.35 mm (Needs Soldering)
|-
| LIDAR-Lite 3 Laser Rangefinder High Performance (LLV3HP) || GARMIN || RB-Pli-17 || $149.05 || Compact 53mm x 33mm x 24mm module with 40m measuring range. Sturdy IPX7-rated housing for drone, robot or unmanned vehicle applications. User configurability allows adjustment between accuracy, operating range and measurement time. Communicates via I2C and PWM. Low power consumption; requires less than 85 milliamps during acquisition
|-
| LD19 D300 LiDAR Developer Kit, 360 DToF Laser Scanner, Supports ROS1/2, Raspberry Pi & Jetson Nano || HIWONDER || RM-HIWO-040 || $98.99 || Long Service Time of 10,000 hours. Resistance to Strong Light of 30,000 lux. Distance Measurement up to a Radius of 12 m. All-round Laser Scanning of 360 degrees
|-
| Benewake TFMINI Plus Micro LIDAR Module UART/I2C (12 m) || BENEWAKE || RB-Ben-09 || $49.90 || Features a cost-effective LiDAR. Offers a small-size and low power consumption. Improves the frame rate. Introduces IP65 enclosures. Requires TTL-USB converter to change between UART and I2C
|-
| RPLIDAR S1 360° Laser Scanner (40 m) || SLAMTEC || RB-Rpk-13 || $649.00 || Can take up to 9200 samples of laser ranging per second. Offers a high rotation speed. Equipped with SLAMTEC patented OPTMAG technology
|-
| Benewake TF-Luna 8m LiDAR Distance Sensor || BENEWAKE || RB-Ben-17 || $19.98 || Features a low-cost ranging LiDAR module. Has an operating range of 0.2 - 8 m. Gives unique optical and electrical design. Provides highly stable, accurate, sensitive range detection. Supports power saving mode for power-sensitive applications. Suitable for scenarios with strict load requirements
|-
| Slamtec RPlidar S3 360° Laser Scanner (40 m) || SLAMTEC || RB-Rpk-29 || $549.00 || Offers a next-generation low-cost 360-degree 2D laser scanner. Smaller size, better performance. Comes with a rotation speed detection and adaptive system. Performs a 2D 360-degree scan within a 40-meter range. Low reflectivity at 15 meters measurement radius. Ideal for both outdoor and indoor environments. Compatible with the SLAMTEC ecosystem
|-
| YDLIDAR TG30 360° Laser Scanner (30 m) || YDLIDAR || RB-Ydl-09 || $399.00 || Offers 360-degree omnidirectional scanning. Compact structure, suitable for integration. Provides TOF (Time of Flight) ranging technology. Enclosed housing and IP65 proof level. Equipped with related optics, electricity, and algorithm
|-
| YDLIDAR G2 360° Laser Scanner || YDLIDAR || RB-Ydl-05 || $159.00 || Features a 360° Omnidirectional scanning range. Offers small distance error, stable performance and high accuracy. Has a 5-12 Hz adaptive scanning frequency
|-
| Benewake TF02-Pro LIDAR LED Rangefinder IP65 (40m) || BENEWAKE || RB-Ben-14 || $86.39 || Features a single-point ranging LiDAR. Can achieve stable, accurate, sensitive and high-frequency range detection. Offers a sensor range up to 40 meters. Has an ambient light resistance up to 100 Klux
|}
00b24fa06ed93f8df6481a771ae4ddf2da094084
1012
1011
2024-05-14T17:28:06Z
Vrtnis
21
wikitext
text/x-wiki
This guide provides insights and recommendations for choosing cost-effective 3D LiDAR systems for various applications.
This guide is incomplete and a work in progress; you can help by expanding it!
==== Unitree L1 ====
This model is one of the most affordable 3D LiDAR systems that might fit the budget constraints of students and hobbyists. While not perfect, it offers basic functionality needed for entry-level 3D scanning tasks.
==== Stereoscopic Cameras ====
An alternative to traditional LiDAR systems, stereoscopic cameras use dual lenses to capture spatial data, providing a 3D perception at a potentially lower cost. This technology can be a viable option for those unable to afford dedicated 3D LiDAR systems.
=== Understanding LiDAR Technologies ===
==== Regular 2D LiDAR ====
These devices create a 2D plane of points by rotating around a vertical axis. They are generally more affordable and can offer a high sample rate, making them suitable for applications requiring planar data.
==== Stacked 2D LiDARs (Multi-layer 3D LiDAR) ====
Often marketed as 3D LiDAR, these are essentially multiple 2D LiDARs arranged vertically at different angles, rotating around the same axis. They tend to be expensive and do not produce a dense 3D point cloud unless moved in space, making them less ideal for static applications.
==== 3D Scanners ====
These systems use a 2D LiDAR tilted 90 degrees, combined with a secondary mechanism that rotates the device around its longitudinal axis. While they generate detailed 3D point clouds, they do so slowly and are not suited for real-time applications.
=== Budget-Friendly Alternatives and DIY Solutions ===
==== Smartphone Depth Sensors ====
For those with access to newer smartphones, utilizing built-in depth sensors can be a cost-effective way to gather 3D data. These devices, like the iPhone's TrueDepth camera, can generate usable 2D depth maps or 3D point clouds.
==== DIY 3D Scanners ====
Constructing a 3D scanner from components such as a 2D LiDAR, a Raspberry Pi, and a camera can be an educational and affordable project. This approach requires more technical expertise but allows for customization and potentially lower costs.
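As a rough illustration of the 2D-LiDAR half of such a DIY build, the sketch below reads scans from an RPLidar-style device and converts them to 2D Cartesian points. It assumes the community ''rplidar'' Python package and a serial port path that will differ per setup; the tilt mechanism and 3D accumulation are left out.
<syntaxhighlight lang=python>
# Minimal sketch: read a few scans from a 2D RPLidar and convert them to x/y points.
import math
from rplidar import RPLidar

lidar = RPLidar('/dev/ttyUSB0')  # serial port is an assumption; adjust per setup
try:
    for i, scan in enumerate(lidar.iter_scans()):
        # Each scan is a list of (quality, angle_deg, distance_mm) tuples.
        points = [(d * math.cos(math.radians(a)), d * math.sin(math.radians(a)))
                  for _, a, d in scan if d > 0]
        print(f"scan {i}: {len(points)} points")
        if i >= 4:
            break
finally:
    lidar.stop()
    lidar.stop_motor()
    lidar.disconnect()
</syntaxhighlight>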
Choosing the right LiDAR system depends on the specific needs of the project and the available budget. For projects where real-time data is not critical, alternatives like stereoscopic cameras or DIY solutions might be adequate. However, for more demanding applications, investing in a higher-quality 3D LiDAR may be necessary.
{| class="wikitable"
|+
! Product !! Brand !! Approx Price !! Description
|-
| RPlidar S2 360° Laser Scanner (30 m) || SLAMTEC || $399.00 || Offers 30 meters radius ranging distance. Adopts the low power infrared laser light. Provides excellent ranging accuracy. Works well both in an indoor and outdoor environment. Proof Level: IP65
|-
| Hokuyo UST-10LX Scanning Laser Rangefinder || HOKUYO || $1,670.00 || Small, accurate, high-speed scanning laser range finder. Obstacle detection and localization of autonomous robots. Detection distance (maximum): 30m. Light source: Laser semiconductor (905nm). Scan speed: 25ms (Motor speed 2400rpm)
|-
| RPLidar A1M8 - 360 Degree Laser Scanner Development Kit || SLAMTEC || $85.00 || 360° Omnidirectional Laser Scan. 5.5 - 10Hz Adaptive Scan Frequency. Sample Frequency: 4000 - 8000Hz. Distance range: 0.15 - 12m. Works With SLAMAIg
|-
| Slamtec RPLIDAR A3 360° Laser Scanner (25 m) || SLAMTEC || $599.00 || White object: 25 m, Dark object: 10 m (Enhanced mode). White object: 20 m, Dark object: TBD (Outdoor mode). Sample Rate: 16000 - 10000 times per second. Scan Rate: 10-20 Hz. Angular Resolution: 0.225°. Supports former SDK protocols
|-
| Slamtec Mapper Pro M2M2 360° Laser Mapping Sensor (40 m) || SLAMTEC || $699.00 || Features a 40 m laser mapping sensor. Provides large scenarios and high-quality mapping. Offers integrating map building and real-time localization and navigation. Offers tilting compensation and fast-moving
|-
| LIDAR-Lite 3 Laser Rangefinder || GARMIN || $124.95 || Compact 48mm x 40mm x 20mm module with 40m measuring range. Signal Processing Improvements offer 5X Faster Measurement Speeds. Improved I2C Communications and assignable I2C Addressing. Great for drones, robotics and other demanding applications
|-
| Benewake TF03 LIDAR LED Rangefinder IP67 (100 m) || BENEWAKE || $218.46 || Features excellent performance and a compact size. Upgrades more than ten key indicators and opens multiple expansion functions. Is used in the fields of intelligent transportation and industrial safety warning. Provides an industrial grade, long-distance of 100 m
|-
| RPLIDAR A2M12 360° Laser Range Scanner || SLAMTEC || $229.00 || Offers a low-cost 360° Laser Range Scanner. Comes with a rotation speed detection and adaptive system. Features low noise brushless motor. Adopts the low-power infrared laser light. Can run smoothly without any noise
|-
| RPLIDAR A2M8 360° Laser Scanner (DISCONTINUED) || SLAMTEC || $319.00 || Low Cost 360 Degree Laser Range Scanner. Sample Frequency: 2000 - 8000 Hz. Scan Rate: 5 - 15Hz. Distance Range: 0.15 - 12m. Angular Resolution: 0.45 - 1.35°. Barrel connector available Small Barrel Connector Assembly 0.7x2.35 mm (Needs Soldering)
|-
| LIDAR-Lite 3 Laser Rangefinder High Performance (LLV3HP) || GARMIN || $149.05 || Compact 53mm x 33mm x 24mm module with 40m measuring range. Sturdy IPX7-rated housing for drone, robot or unmanned vehicle applications. User configurability allows adjustment between accuracy, operating range and measurement time. Communicates via I2C and PWM. Low power consumption; requires less than 85 milliamps during acquisition
|-
| LD19 D300 LiDAR Developer Kit, 360 DToF Laser Scanner, Supports ROS1/2, Raspberry Pi & Jetson Nano || HIWONDER || $98.99 || Long Service Time of 10,000 hours. Resistance to Strong Light of 30,000 lux. Distance Measurement up to a Radius of 12 m. All-round Laser Scanning of 360 degrees
|-
| Benewake TFMINI Plus Micro LIDAR Module UART/I2C (12 m) || BENEWAKE || $49.90 || Features a cost-effective LiDAR. Offers a small-size and low power consumption. Improves the frame rate. Introduces IP65 enclosures. Requires TTL-USB converter to change between UART and I2C
|-
| RPLIDAR S1 360° Laser Scanner (40 m) || SLAMTEC || $649.00 || Can take up to 9200 samples of laser ranging per second. Offers a high rotation speed. Equipped with SLAMTEC patented OPTMAG technology
|-
| Benewake TF-Luna 8m LiDAR Distance Sensor || BENEWAKE || $19.98 || Features a low-cost ranging LiDAR module. Has an operating range of 0.2 - 8 m. Gives unique optical and electrical design. Provides highly stable, accurate, sensitive range detection. Supports power saving mode for power-sensitive applications. Suitable for scenarios with strict load requirements
|-
| Slamtec RPlidar S3 360° Laser Scanner (40 m) || SLAMTEC || $549.00 || Offers a next-generation low-cost 360-degree 2D laser scanner. Smaller size, better performance. Comes with a rotation speed detection and adaptive system. Performs a 2D 360-degree scan within a 40-meter range. Low reflectivity at 15 meters measurement radius. Ideal for both outdoor and indoor environments. Compatible with the SLAMTEC ecosystem
|-
| YDLIDAR TG30 360° Laser Scanner (30 m) || YDLIDAR || $399.00 || Offers 360-degree omnidirectional scanning. Compact structure, suitable for integration. Provides TOF (Time of Flight) ranging technology. Enclosed housing and IP65 proof level. Equipped with related optics, electricity, and algorithm
|-
| YDLIDAR G2 360° Laser Scanner || YDLIDAR || $159.00 || Features a 360° Omnidirectional scanning range. Offers small distance error, stable performance and high accuracy. Has a 5-12 Hz adaptive scanning frequency
|-
| Benewake TF02-Pro LIDAR LED Rangefinder IP65 (40m) || BENEWAKE || $86.39 || Features a single-point ranging LiDAR. Can achieve stable, accurate, sensitive and high-frequency range detection. Offers a sensor range up to 40 meters. Has an ambient light resistance up to 100 Klux
|}
fe3724ba4ea0a57290524d92496700097cbf78f3
Honda Robotics
0
238
1013
2024-05-14T17:42:54Z
Vrtnis
21
Created page with "[https://global.honda/products/robotics.html Honda Robotics] is a robotics division of Honda based in Japan. They are known for developing the advanced humanoid robot [[ASIMO]..."
wikitext
text/x-wiki
[https://global.honda/products/robotics.html Honda Robotics] is a robotics division of Honda based in Japan. They are known for developing the advanced humanoid robot [[ASIMO]].
{{infobox company
| name = Honda Robotics
| country = Japan
| website_link = https://global.honda/products/robotics.html
| robots = [[ASIMO]]
}}
[[Category:Companies]]
a9af9fc3a6032570aa5052e7d912da16829198cb
ASIMO
0
239
1014
2024-05-14T17:46:15Z
Vrtnis
21
Created page with "ASIMO is a humanoid robot developed by [[Honda Robotics]], a division of the Japanese automotive manufacturer Honda. ASIMO is one of the most advanced humanoid robots and is k..."
wikitext
text/x-wiki
ASIMO is a humanoid robot developed by [[Honda Robotics]], a division of the Japanese automotive manufacturer Honda. ASIMO is one of the most advanced humanoid robots and is known for its remarkable walking and running capabilities.
{{infobox robot
| name = ASIMO
| organization = [[Honda Robotics]]
| height = 130 cm (4 ft 3 in)
| weight = 54 kg (119 lb)
| video_link = https://www.youtube.com/watch?v=Q3C5sc8EIVI
| cost = Unknown
}}
== Development ==
Honda initiated the development of ASIMO in the 1980s, with the goal of creating a humanoid robot that could assist people with everyday tasks. ASIMO, an acronym for Advanced Step in Innovative Mobility, was first introduced in 2000 and has undergone several upgrades since then.
== Design ==
ASIMO stands at a height of 130 cm and weighs approximately 54 kilograms. Its design focuses on mobility, allowing it to walk, run, climb stairs, and perform a variety of movements that mimic human actions. It is equipped with advanced sensors and actuators to navigate and interact with its environment safely.
== Features ==
ASIMO features advanced recognition technologies, including the ability to recognize faces, voices, and gestures. It can navigate complex environments, avoid obstacles, and perform tasks such as carrying objects, opening doors, and pushing carts. ASIMO's user-friendly interface and human-like movements make it suitable for a range of applications, from public demonstrations to potential domestic assistance.
== Impact ==
ASIMO has been influential in the field of robotics, demonstrating the potential for humanoid robots to assist in daily activities and enhance human-robot interaction. It has also played a significant role in Honda's research and development of robotics technologies, contributing to advancements in mobility, AI, and automation.
== References ==
[https://global.honda/products/robotics.html Honda Robotics official website]
[https://www.youtube.com/watch?v=Q3C5sc8EIVI Presentation of ASIMO by Honda]
[[Category:Robots]]
[[Category:Humanoid Robots]]
[[Category:Honda Robotics]]
d13a632d58f80b7b05870c1186e4578a8a2d3071
1015
1014
2024-05-14T17:47:47Z
Vrtnis
21
wikitext
text/x-wiki
ASIMO is a humanoid robot developed by [[Honda Robotics]], a division of the Japanese automotive manufacturer Honda. ASIMO is one of the more advanced humanoid robots and is known for its remarkable walking and running capabilities.
{{infobox robot
| name = ASIMO
| organization = [[Honda Robotics]]
| height = 130 cm (4 ft 3 in)
| weight = 54 kg (119 lb)
| video_link = https://www.youtube.com/watch?v=OkA4foR7bbk
| cost = Unknown
}}
== Development ==
Honda initiated the development of ASIMO in the 1980s, with the goal of creating a humanoid robot that could assist people with everyday tasks. ASIMO, an acronym for Advanced Step in Innovative Mobility, was first introduced in 2000 and has undergone several upgrades since then.
== Design ==
ASIMO stands at a height of 130 cm and weighs approximately 54 kilograms. Its design focuses on mobility, allowing it to walk, run, climb stairs, and perform a variety of movements that mimic human actions. It is equipped with advanced sensors and actuators to navigate and interact with its environment safely.
== Features ==
ASIMO features advanced recognition technologies, including the ability to recognize faces, voices, and gestures. It can navigate complex environments, avoid obstacles, and perform tasks such as carrying objects, opening doors, and pushing carts. ASIMO's user-friendly interface and human-like movements make it suitable for a range of applications, from public demonstrations to potential domestic assistance.
== Impact ==
ASIMO has been influential in the field of robotics, demonstrating the potential for humanoid robots to assist in daily activities and enhance human-robot interaction. It has also played a significant role in Honda's research and development of robotics technologies, contributing to advancements in mobility, AI, and automation.
== References ==
[https://global.honda/products/robotics.html Honda Robotics official website]
[https://www.youtube.com/watch?v=Q3C5sc8EIVI Presentation of ASIMO by Honda]
[[Category:Robots]]
[[Category:Humanoid Robots]]
[[Category:Honda Robotics]]
b5c34c69e72946434e5ce78bc00e829f0cf59a9c
1016
1015
2024-05-14T17:48:07Z
Vrtnis
21
wikitext
text/x-wiki
ASIMO is a humanoid robot developed by [[Honda Robotics]], a division of the Japanese automotive manufacturer Honda. ASIMO is one of the more advanced humanoid robots and is known for its remarkable walking and running capabilities.
{{infobox robot
| name = ASIMO
| organization = [[Honda Robotics]]
| height = 130 cm (4 ft 3 in)
| weight = 54 kg (119 lb)
| video_link = https://www.youtube.com/watch?v=OkA4foR7bbk
| cost = Unknown
}}
== Development ==
Honda initiated the development of ASIMO in the 1980s, with the goal of creating a humanoid robot that could assist people with everyday tasks. ASIMO, an acronym for Advanced Step in Innovative Mobility, was first introduced in 2000 and has undergone several upgrades since then.
== Design ==
ASIMO stands at a height of 130 cm and weighs approximately 54 kilograms. Its design focuses on mobility, allowing it to walk, run, climb stairs, and perform a variety of movements that mimic human actions. It is equipped with advanced sensors and actuators to navigate and interact with its environment safely.
== Features ==
ASIMO features advanced recognition technologies, including the ability to recognize faces, voices, and gestures. It can navigate complex environments, avoid obstacles, and perform tasks such as carrying objects, opening doors, and pushing carts. ASIMO's user-friendly interface and human-like movements make it suitable for a range of applications, from public demonstrations to potential domestic assistance.
== Impact ==
ASIMO has been influential in the field of robotics, demonstrating the potential for humanoid robots to assist in daily activities and enhance human-robot interaction. It has also played a significant role in Honda's research and development of robotics technologies, contributing to advancements in mobility, AI, and automation.
== References ==
[https://global.honda/products/robotics.html Honda Robotics official website]
[https://www.youtube.com/watch?v=Q3C5sc8EIVI Presentation of ASIMO by Honda]
[[Category:Robots]]
99403ef69c95c304c11fc111e41cd8590e6254aa
Main Page
0
1
1017
996
2024-05-14T17:49:00Z
Vrtnis
21
/* List of Humanoid Robots */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
==== Resources ====
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
| Textbook for graduate-level courses in robotics
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[Reinforcement Learning]]
| Resources related to understanding reinforcement learning
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Courses]]
| Category for pages about useful courses related to robotics
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Firmware]]
| Category for pages relating to firmware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== List of Actuators ===
{| class="wikitable"
|-
! Actuator
! Notes
|-
| [[MyActuator X-Series]]
| MIT Cheetah-like quasi-direct drive actuator, with planetary gears
|-
| [[OBot]]
| Open-source actuator
|-
| [[SPIN Servo]]
| Open-source actuator
|-
| [[VESCular6]]
| A project based on [[VESC]]
|-
| [[ODrive]]
| A precision motor controller
|-
| [[Solo Motor Controller]]
| A motor controller alternative to the [[ODrive]].
|-
| [[J60]]
| Actuators built for the [[DEEP Robotics]] quadrupeds.
|-
| [[K-Scale Motor Controller]]
| An open-source motor controller
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Honda Robotics]]
| [[ASIMO]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]], [[G1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[Xpeng]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]], [[Robonaut2]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[Proxy]]
|
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[SUPCON]]
| [[Navigator α]]
|-
| [[LASER Robotics]]
| [[HECTOR V2]]
|-
| [[UC Berkeley]]
| [[Berkeley Blue]]
|-
| [[NEURA Robotics]]
| [[4NE-1]]
|}
7817794212da50561023a9bba635198d3259a7b9
1023
1017
2024-05-15T03:06:10Z
Modeless
7
/* List of Humanoid Robots */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
==== Resources ====
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
| Textbook for graduate-level courses in robotics
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[Reinforcement Learning]]
| Resources related to understanding reinforcement learning
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Courses]]
| Category for pages about useful courses related to robotics
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Firmware]]
| Category for pages relating to firmware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== List of Actuators ===
{| class="wikitable"
|-
! Actuator
! Notes
|-
| [[MyActuator X-Series]]
| MIT Cheetah-like quasi-direct drive actuator, with planetary gears
|-
| [[OBot]]
| Open-source actuator
|-
| [[SPIN Servo]]
| Open-source actuator
|-
| [[VESCular6]]
| A project based on [[VESC]]
|-
| [[ODrive]]
| A precision motor controller
|-
| [[Solo Motor Controller]]
| A motor controller alternative to the [[ODrive]].
|-
| [[J60]]
| Actuators built for the [[DEEP Robotics]] quadrupeds.
|-
| [[K-Scale Motor Controller]]
| An open-source motor controller
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Honda Robotics]]
| [[ASIMO]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]], [[G1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[Xpeng]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]], [[Robonaut2]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[Proxy]]
|
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[SUPCON]]
| [[Navigator α]]
|-
| [[LASER Robotics]]
| [[HECTOR V2]]
|-
| [[UC Berkeley]]
| [[Berkeley Blue]]
|-
| [[NEURA Robotics]]
| [[4NE-1]]
|-
| [[University of Tehran]]
| [[Surena IV]]
|}
e43add7d64cc87dc9bb492bcc764d784001b8c9f
1026
1023
2024-05-15T19:18:10Z
Ben
2
Move list of humanoid robots up higher
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
==== Resources ====
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
| Textbook for graduate-level courses in robotics
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[Reinforcement Learning]]
| Resources related to understanding reinforcement learning
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Courses]]
| Category for pages about useful courses related to robotics
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Firmware]]
| Category for pages relating to firmware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Honda Robotics]]
| [[ASIMO]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]], [[G1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[Xpeng]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]], [[Robonaut2]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[Proxy]]
|
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[SUPCON]]
| [[Navigator α]]
|-
| [[LASER Robotics]]
| [[HECTOR V2]]
|-
| [[UC Berkeley]]
| [[Berkeley Blue]]
|-
| [[NEURA Robotics]]
| [[4NE-1]]
|-
| [[University of Tehran]]
| [[Surena IV]]
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== List of Actuators ===
{| class="wikitable"
|-
! Actuator
! Notes
|-
| [[MyActuator X-Series]]
| MIT Cheetah-like quasi-direct drive actuator, with planetary gears
|-
| [[OBot]]
| Open-source actuator
|-
| [[SPIN Servo]]
| Open-source actuator
|-
| [[VESCular6]]
| A project based on [[VESC]]
|-
| [[ODrive]]
| A precision motor controller
|-
| [[Solo Motor Controller]]
| A motor controller alternative to the [[ODrive]].
|-
| [[J60]]
| Actuators built for the [[DEEP Robotics]] quadrupeds.
|-
| [[K-Scale Motor Controller]]
| An open-source motor controller
|}
b48df7d338fb15eac12f6e336f39060ec79e6f2d
1034
1026
2024-05-15T19:35:51Z
Vrtnis
21
/* List of Humanoid Robots */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
==== Resources ====
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
| Textbook for graduate-level courses in robotics
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[Reinforcement Learning]]
| Resources related to understanding reinforcement learning
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Courses]]
| Category for pages about useful courses related to robotics
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Firmware]]
| Category for pages relating to firmware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Honda Robotics]]
| [[ASIMO]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[SoftBank Robotics]]
| [[Pepper]], [[NAO]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]], [[G1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[Xpeng]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]], [[Robonaut2]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[Proxy]]
|
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[SUPCON]]
| [[Navigator α]]
|-
| [[LASER Robotics]]
| [[HECTOR V2]]
|-
| [[UC Berkeley]]
| [[Berkeley Blue]]
|-
| [[NEURA Robotics]]
| [[4NE-1]]
|-
| [[University of Tehran]]
| [[Surena IV]]
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== List of Actuators ===
{| class="wikitable"
|-
! Actuator
! Notes
|-
| [[MyActuator X-Series]]
| MIT Cheetah-like quasi-direct drive actuator, with planetary gears
|-
| [[OBot]]
| Open-source actuator
|-
| [[SPIN Servo]]
| Open-source actuator
|-
| [[VESCular6]]
| A project based on [[VESC]]
|-
| [[ODrive]]
| A precision motor controller
|-
| [[Solo Motor Controller]]
| A motor controller alternative to the [[ODrive]].
|-
| [[J60]]
| Actuators built for the [[DEEP Robotics]] quadrupeds.
|-
| [[K-Scale Motor Controller]]
| An open-source motor controller
|}
2c926d658882abfef90055bc3612cd3603eb98ec
1037
1034
2024-05-15T22:43:48Z
Budzianowski
19
/* Resources */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
==== Resources ====
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
| Textbook for graduate-level courses in robotics
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Courses]]
| Category for pages about useful courses related to robotics
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Firmware]]
| Category for pages relating to firmware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Honda Robotics]]
| [[ASIMO]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[SoftBank Robotics]]
| [[Pepper]], [[NAO]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]], [[G1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[Xpeng]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]], [[Robonaut2]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[Proxy]]
|
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[SUPCON]]
| [[Navigator α]]
|-
| [[LASER Robotics]]
| [[HECTOR V2]]
|-
| [[UC Berkeley]]
| [[Berkeley Blue]]
|-
| [[NEURA Robotics]]
| [[4NE-1]]
|-
| [[University of Tehran]]
| [[Surena IV]]
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== List of Actuators ===
{| class="wikitable"
|-
! Actuator
! Notes
|-
| [[MyActuator X-Series]]
| MIT Cheetah-like quasi-direct drive actuator, with planetary gears
|-
| [[OBot]]
| Open-source actuator
|-
| [[SPIN Servo]]
| Open-source actuator
|-
| [[VESCular6]]
| A project based on [[VESC]]
|-
| [[ODrive]]
| A precision motor controller
|-
| [[Solo Motor Controller]]
| A motor controller alternative to the [[ODrive]].
|-
| [[J60]]
| Actuators built for the [[DEEP Robotics]] quadrupeds.
|-
| [[K-Scale Motor Controller]]
| An open-source motor controller
|}
751b964bb4b17ebeee38374b13adaf0fc24b3446
1044
1037
2024-05-16T16:19:45Z
Ben
2
/* List of Actuators */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
==== Resources ====
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
| Textbook for graduate-level courses in robotics
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Courses]]
| Category for pages about useful courses related to robotics
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Firmware]]
| Category for pages relating to firmware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Honda Robotics]]
| [[ASIMO]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[SoftBank Robotics]]
| [[Pepper]], [[NAO]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]], [[G1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[Xpeng]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]], [[Robonaut2]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[Proxy]]
|
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[SUPCON]]
| [[Navigator α]]
|-
| [[LASER Robotics]]
| [[HECTOR V2]]
|-
| [[UC Berkeley]]
| [[Berkeley Blue]]
|-
| [[NEURA Robotics]]
| [[4NE-1]]
|-
| [[University of Tehran]]
| [[Surena IV]]
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== List of Actuators ===
{| class="wikitable"
|-
! Actuator
! Notes
|-
| [[OBot]]
| Open-source actuator
|-
| [[SPIN Servo]]
| Open-source actuator
|-
| [[VESCular6]]
| A project based on [[VESC]]
|-
| [[ODrive]]
| A precision motor controller
|-
| [[Solo Motor Controller]]
| A motor controller alternative to the [[ODrive]].
|-
| [[J60]]
| Actuators built for the [[DEEP Robotics]] quadrupeds.
|-
| [[K-Scale Motor Controller]]
| An open-source motor controller
|}
00a93ee6944d43acd5dea68aa2dc0f9071e18547
1045
1044
2024-05-16T17:37:59Z
Ben
2
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
| Textbook for graduate-level courses in robotics
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Courses]]
| Category for pages about useful courses related to robotics
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Firmware]]
| Category for pages relating to firmware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Honda Robotics]]
| [[ASIMO]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[SoftBank Robotics]]
| [[Pepper]], [[NAO]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]], [[G1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[Xpeng]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]], [[Robonaut2]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[Proxy]]
|
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[SUPCON]]
| [[Navigator α]]
|-
| [[LASER Robotics]]
| [[HECTOR V2]]
|-
| [[UC Berkeley]]
| [[Berkeley Blue]]
|-
| [[NEURA Robotics]]
| [[4NE-1]]
|-
| [[University of Tehran]]
| [[Surena IV]]
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== List of Actuators ===
{| class="wikitable"
|-
! Actuator
! Notes
|-
| [[OBot]]
| Open-source actuator
|-
| [[SPIN Servo]]
| Open-source actuator
|-
| [[VESCular6]]
| A project based on [[VESC]]
|-
| [[ODrive]]
| A precision motor controller
|-
| [[Solo Motor Controller]]
| A motor controller alternative to the [[ODrive]].
|-
| [[J60]]
| Actuators built for the [[DEEP Robotics]] quadrupeds.
|-
| [[K-Scale Motor Controller]]
| An open-source motor controller
|}
32c68423867181eb55e174ce61c5eaa627dfd14f
K-Scale CANdaddy
0
235
1018
1001
2024-05-14T20:18:03Z
Vedant
24
wikitext
text/x-wiki
=== MCU ===
* STM32
** Programming mode button (boot / reset)
=== USB ===
* adsfasdfasdf
=== Buttons ===
* Button debouncer
** TODO: Add the IC name
=== LCD ===
* How do we connect I2C chip to the LCD screen?
=== CAN ===
* How to get supply-side VCC without having it on the bus (from supply-side GND)?
* What do we do if supply-side GND is missing?
=== Voltage ===
* Load sharing between USB and battery
0617fff5c391a84cb45cb113c9559124d26ddf85
1025
1018
2024-05-15T05:22:24Z
Ben
2
wikitext
text/x-wiki
=== MCU ===
* STM32
** Programming mode button (boot / reset)
=== USB ===
* adsfasdfasdf
=== Buttons ===
* Button debouncer
** TODO: Add the IC name
=== LCD ===
* How do we connect I2C chip to the LCD screen?
=== CAN ===
* How to get supply-side VCC without having it on the bus (from supply-side GND)?
* What do we do if supply-side GND is missing?
=== Voltage ===
* Load sharing between USB and battery
=== Questions ===
# How does the switching regulator mechanism work?
# Do digital logic levels depend on source voltage? For example, let's say that a 3.3V-powered device is sending signals to a 5V-powered device. From my understanding of the datasheet, a direct connection between the signals should not be allowed. However, why is this? What are the internal mechanisms of the pins that make it so that the logic levels need to be based on the source voltage?
# I understand that there is a way to both use the battery as a source as well as to charge it using the USB. However, wouldn't this mean not being able to put a diode within the circuit directing current in one direction? Isn't this a hazard, destroying the circuit the moment a bit of noise comes through?
# Power Ground in the context of a switching regulator for our circuit to amplify 3.7 V battery to 5 V source. Do we just connect Power Ground to Battery Ground?
9a518a08c5e61fcf2c20fd2ccc999b8210c594d1
File:Unitree g1.png
6
240
1019
2024-05-14T21:11:17Z
Ben
2
wikitext
text/x-wiki
Unitree g1
9157f7dba8762187f74db4e9a5a3a4c9ec00a661
Template:Infobox robot
10
28
1020
182
2024-05-14T21:14:26Z
Ben
2
wikitext
text/x-wiki
{{infobox
| name = {{{name}}}
| key1 = Name
| value1 = {{{name}}}
| key2 = Organization
| value2 = {{{organization|}}}
| key3 = Video
| value3 = {{#if: {{{video_link|}}} | [{{{video_link}}} Video] }}
| key4 = Cost
| value4 = {{{cost|}}}
| key5 = Height
| value5 = {{{height|}}}
| key6 = Weight
| value6 = {{{weight|}}}
| key7 = Speed
| value7 = {{{speed|}}}
| key8 = Lift Force
| value8 = {{{lift_force|}}}
| key9 = Battery Life
| value9 = {{{battery_life|}}}
| key10 = Battery Capacity
| value10 = {{{battery_capacity|}}}
| key11 = Purchase
| value11 = {{#if: {{{purchase_link|}}} | [{{{purchase_link}}} Link] }}
| key12 = Number Made
| value12 = {{{number_made|}}}
| key13 = DoF
| value13 = {{{dof|}}}
| key14 = Status
| value14 = {{{status|}}}
| key15 = Notes
| value15 = {{{notes|}}}
}}
ae8d9b2da5170a553c8ba59d56d16d988c18821e
G1
0
233
1021
997
2024-05-14T21:16:43Z
Ben
2
wikitext
text/x-wiki
[[File:Unitree g1.png|thumb]]
The G1 is an upcoming humanoid robot from [[Unitree]].
{{infobox robot
| name = G1
| organization = [[Unitree]]
| video_link = https://mp.weixin.qq.com/s/RGNVRazZqDn3y_Ijemc5Kw
| cost = 16000 USD
| height = 127 cm
| weight = 35 kg
| speed =
| lift_force =
| battery_life =
| battery_capacity =
| purchase_link =
| number_made =
| dof = 23
| status = Preorders
}}
{{infobox robot
| name = G1 Edu Standard
| organization = [[Unitree]]
| cost = 31900 USD
| notes = Improved torque, warranty
}}
{{infobox robot
| name = G1 Edu Plus
| organization = [[Unitree]]
| cost = 34900 USD
| notes = Docking station
}}
{{infobox robot
| name = G1 Edu Smart
| organization = [[Unitree]]
| cost = 43900 USD
| notes = 3 waist DoFs instead of 1, more arm DoFs
}}
{{infobox robot
| name = G1 Edu Ultimate
| organization = [[Unitree]]
| cost = 53900 USD
| notes = Comes with force-controlled 3-finger dexterous hands
}}
11b1e60701dbf141ea569693269ca05438fb79c1
1022
1021
2024-05-14T21:20:19Z
Ben
2
wikitext
text/x-wiki
[[File:Unitree g1.png|thumb]]
The G1 is an upcoming humanoid robot from [[Unitree]].
{{infobox robot
| name = G1
| organization = [[Unitree]]
| video_link = https://mp.weixin.qq.com/s/RGNVRazZqDn3y_Ijemc5Kw
| cost = 16000 USD
| height = 127 cm
| weight = 35 kg
| speed =
| lift_force =
| battery_life =
| battery_capacity = 9000 mAh
| purchase_link =
| number_made =
| dof = 23
| status = Preorders
}}
{{infobox robot
| name = G1 Edu Standard
| organization = [[Unitree]]
| cost = 31900 USD
| notes = Improved torque, warranty
}}
{{infobox robot
| name = G1 Edu Plus
| organization = [[Unitree]]
| cost = 34900 USD
| notes = Docking station
}}
{{infobox robot
| name = G1 Edu Smart
| organization = [[Unitree]]
| cost = 43900 USD
| notes = 3 waist DoFs instead of 1, more arm DoFs
}}
{{infobox robot
| name = G1 Edu Ultimate
| organization = [[Unitree]]
| cost = 53900 USD
| notes = Comes with force-controlled 3-finger dexterous hands
}}
303bd7bc226ac0180d7db51ff129b963fff0a01a
Surena IV
0
241
1024
2024-05-15T03:06:40Z
Modeless
7
Created page with "A humanoid built at the University of Tehran. https://surenahumanoid.com/"
wikitext
text/x-wiki
A humanoid built at the University of Tehran. https://surenahumanoid.com/
61aa82a1b4c689d70dffa1f49952dc7b06cb0f8a
SoftBank Robotics
0
242
1027
2024-05-15T19:22:06Z
Vrtnis
21
Created page with "[https://www.softbankrobotics.com/ SoftBank Robotics] is a robotics division of SoftBank Group, based in Japan. They are known for developing advanced humanoid robots such as..."
wikitext
text/x-wiki
[https://www.softbankrobotics.com/ SoftBank Robotics] is a robotics division of SoftBank Group, based in Japan. They are known for developing advanced humanoid robots such as [[Pepper]] and [[NAO]].
{{infobox company
| name = SoftBank Robotics
| country = Japan
| website_link = https://www.softbankrobotics.com/
| robots = [[Pepper]], [[NAO]]
}}
[[Category:Companies]]
74d615743cf5ea836b0ef26ac27fae0687e453b3
1032
1027
2024-05-15T19:34:20Z
Vrtnis
21
Vrtnis moved page [[Softbank Robotics]] to [[SoftBank Robotics]]: Correct title capitalization
wikitext
text/x-wiki
[https://www.softbankrobotics.com/ SoftBank Robotics] is a robotics division of SoftBank Group, based in Japan. They are known for developing advanced humanoid robots such as [[Pepper]] and [[NAO]].
{{infobox company
| name = SoftBank Robotics
| country = Japan
| website_link = https://www.softbankrobotics.com/
| robots = [[Pepper]], [[NAO]]
}}
[[Category:Companies]]
74d615743cf5ea836b0ef26ac27fae0687e453b3
Pepper
0
243
1028
2024-05-15T19:25:00Z
Vrtnis
21
Created page with "Pepper is a humanoid robot developed by [[Softbank Robotics]], a division of SoftBank Group. Pepper is designed to interact with humans and is used in various customer service..."
wikitext
text/x-wiki
Pepper is a humanoid robot developed by [[Softbank Robotics]], a division of SoftBank Group. Pepper is designed to interact with humans and is used in various customer service and retail environments.
{{infobox robot
| name = Pepper
| organization = [[Softbank Robotics]]
| height = 121 cm (4 ft)
| weight = 28 kg (62 lbs)
| video_link = https://www.youtube.com/watch?v=2GhUd0OJdJw
| cost = Approximately $1,800
}}
Pepper was introduced by SoftBank Robotics in June 2014. It is designed to understand and respond to human emotions, making it suitable for roles in customer service, retail, and healthcare.
== References ==
[https://www.softbankrobotics.com/ SoftBank Robotics official website]
[https://www.youtube.com/watch?v=2GhUd0OJdJw Presentation of Pepper by SoftBank Robotics]
[[Category:Robots]]
[[Category:Humanoid Robots]]
[[Category:SoftBank Robotics]]
a28a621fc3e7afb36cc3d20c6fdb5b733e8b49b8
1029
1028
2024-05-15T19:26:11Z
Vrtnis
21
wikitext
text/x-wiki
Pepper is a humanoid robot developed by [[Softbank Robotics]], a division of SoftBank Group. Pepper is designed to interact with humans and is used in various customer service and retail environments.
{{infobox robot
| name = Pepper
| organization = [[Softbank Robotics]]
| height = 121 cm (4 ft)
| weight = 28 kg (62 lbs)
| video_link = https://www.youtube.com/watch?v=kr05reBxVRs
| cost = Approximately $1,800
}}
Pepper was introduced by SoftBank Robotics in June 2014. It is designed to understand and respond to human emotions, making it suitable for roles in customer service, retail, and healthcare.
== References ==
[https://www.softbankrobotics.com/ SoftBank Robotics official website]
[https://www.youtube.com/watch?v=2GhUd0OJdJw Presentation of Pepper by SoftBank Robotics]
[[Category:Robots]]
[[Category:Humanoid Robots]]
[[Category:SoftBank Robotics]]
9e0192c414352ae95ba759d3fc4088798349a1b1
1030
1029
2024-05-15T19:26:31Z
Vrtnis
21
/* References */
wikitext
text/x-wiki
Pepper is a humanoid robot developed by [[Softbank Robotics]], a division of SoftBank Group. Pepper is designed to interact with humans and is used in various customer service and retail environments.
{{infobox robot
| name = Pepper
| organization = [[Softbank Robotics]]
| height = 121 cm (4 ft)
| weight = 28 kg (62 lbs)
| video_link = https://www.youtube.com/watch?v=kr05reBxVRs
| cost = Approximately $1,800
}}
Pepper was introduced by SoftBank Robotics in June 2014. It is designed to understand and respond to human emotions, making it suitable for roles in customer service, retail, and healthcare.
== References ==
[https://www.softbankrobotics.com/ SoftBank Robotics official website]
[https://www.youtube.com/watch?v=2GhUd0OJdJw Presentation of Pepper by SoftBank Robotics]
[[Category:Robots]]
[[Category:Humanoid Robots]]
[[Category:SoftBank Robotics]]
726c2f29bafc2eea3f45c73e4932e7c23476cfea
NAO
0
244
1031
2024-05-15T19:29:50Z
Vrtnis
21
Created page with "NAO is a humanoid robot developed by [[Softbank Robotics]], a division of SoftBank Group. NAO is widely used in education, research, and healthcare for its advanced interactiv..."
wikitext
text/x-wiki
NAO is a humanoid robot developed by [[Softbank Robotics]], a division of SoftBank Group. NAO is widely used in education, research, and healthcare for its advanced interactive capabilities.
{{infobox robot
| name = NAO
| organization = [[Softbank Robotics]]
| height = 58 cm (1 ft 11 in)
| weight = 5.4 kg (11.9 lbs)
| video_link = https://www.youtube.com/watch?v=nNbj2G3GmAo
| cost = Approximately $8,000
}}
NAO was first introduced in 2006 by Aldebaran Robotics, which was later acquired by SoftBank Robotics. NAO has undergone several upgrades, becoming one of the most popular robots used for educational and research purposes.
== References ==
[https://www.softbankrobotics.com/ SoftBank Robotics official website]
[https://www.youtube.com/watch?v=nNbj2G3GmAo Presentation of NAO by SoftBank Robotics]
[[Category:Robots]]
[[Category:Humanoid Robots]]
[[Category:SoftBank Robotics]]
9e28814a20e1f1e0f11d3a085b90f14964143c74
Softbank Robotics
0
245
1033
2024-05-15T19:34:20Z
Vrtnis
21
Vrtnis moved page [[Softbank Robotics]] to [[SoftBank Robotics]]: Correct title capitalization
wikitext
text/x-wiki
#REDIRECT [[SoftBank Robotics]]
6519847bc798ec52ddbf466a6d15bfd308315c52
Learning algorithms
0
32
1035
341
2024-05-15T20:14:33Z
Vrtnis
21
/* Isaac Sim */
wikitext
text/x-wiki
Learning algorithms make it possible to train humanoids to perform different skills such as manipulation or locomotion. Below is an overview of general approaches to training machine learning models for humanoid robots, with example [[applications]]. Typically you need a simulator, a training framework, and a machine learning method to train end-to-end behaviors.
== Physics engines ==
Physics engines are software libraries designed to simulate physical systems in a virtual environment. They are crucial in a variety of fields such as video games, animation, robotics, and engineering simulations. These engines handle the mathematics involved in simulating physical processes like motion, collisions, and fluid dynamics.
For a much more comprehensive overview see [https://simulately.wiki/docs/ Simulately].
===PhysX===
PhysX is a physics engine by NVIDIA used primarily for video games and real-time simulations. It supports rigid body dynamics, cloth simulation, and particle effects, enhancing realism and interactivity in 3D environments.
===MuJoCo===
MuJoCo (Multi-Joint dynamics with Contact) is a physics engine designed for research in robotics and biomechanics. It's known for its speed, accuracy, and ease of use, making it popular for simulating complex systems with robotics and articulated structures.
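For orientation, here is a minimal sketch of driving MuJoCo from Python. It assumes the open-source <code>mujoco</code> bindings are installed, and the MJCF model is a toy falling box rather than a humanoid; only the model file would change for a more realistic robot.
<syntaxhighlight lang="python">
# Minimal MuJoCo stepping loop (assumes: pip install mujoco).
import mujoco

MJCF = """
<mujoco>
  <worldbody>
    <geom type="plane" size="1 1 0.1"/>
    <body name="box" pos="0 0 1">
      <freejoint/>
      <geom type="box" size="0.1 0.1 0.1"/>
    </body>
  </worldbody>
</mujoco>
"""

model = mujoco.MjModel.from_xml_string(MJCF)  # compile the model
data = mujoco.MjData(model)                   # allocate the simulation state

for _ in range(1000):                         # 1000 steps at the default 2 ms timestep
    mujoco.mj_step(model, data)

print("final box height:", data.qpos[2])      # free joint qpos = [x, y, z, quaternion]
</syntaxhighlight>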
===Bullet===
Bullet is a physics engine supporting real-time collision detection and multi-physics simulation for VR, games, visual effects, robotics, and machine learning.
==Simulators==
===[[Isaac Sim]]===
Isaac Sim is NVIDIA’s simulation platform for robotics development. It’s part of their Isaac Robotics platform and uses advanced graphics and AI to create realistic simulations.
==== Isaac Sim Features ====
* '''Advanced Physics Simulation''': Includes PhysX and Flex for detailed simulations of physical interactions like rigid bodies, soft bodies, and fluids.
* '''Photorealistic Rendering''': Uses NVIDIA RTX technology to make environments and objects look incredibly realistic, which is great for tasks that need vision-based learning.
* '''Scalability''': Can simulate multiple robots and environments at the same time, thanks to GPU acceleration, making it handle complex simulations efficiently.
* '''Interoperability''': Works with machine learning frameworks like TensorFlow and PyTorch and supports ROS, so you can easily move from simulation to real-world deployment.
* '''Customizable Environments''': Lets you create and customize simulation environments, including importing 3D models and designing different terrains.
* '''Real-Time Feedback''': Provides real-time monitoring and analytics, giving you insights on how tasks are performing and resource usage.
==== Isaac Sim Applications ====
* '''Robotics Research''': Used in academia and industry to develop and test new algorithms for robot perception, control, and planning.
* '''Autonomous Navigation''': Helps simulate and test navigation algorithms for mobile robots and drones, improving path planning and obstacle avoidance.
* '''Manipulation Tasks''': Supports developing robotic skills like object grasping and assembly tasks, making robots more dexterous and precise.
* '''Industrial Automation''': Helps companies design and validate automation solutions for manufacturing and logistics, boosting efficiency and cutting down on downtime.
* '''Education and Training''': A great educational tool that offers hands-on experience in robotics and AI without the risks and costs of physical experiments.
=== Isaac Sim Integration with Isaac Gym ===
Isaac Sim works alongside Isaac Gym, NVIDIA’s tool for large-scale training with reinforcement learning. While Isaac Sim focuses on detailed simulations, Isaac Gym is great for efficient training. Together, they offer a comprehensive solution for developing and improving robotics applications.
===[https://github.com/haosulab/ManiSkill ManiSkill]===
===[[VSim]]===
== Training frameworks ==
Popular training frameworks are listed here with example applications.
===[https://github.com/NVIDIA-Omniverse/IsaacGymEnvs Isaac Gym]===
Isaac Gym is NVIDIA's robotics simulation tool, part of the Isaac SDK. It leverages GPU acceleration to enable the simulation of thousands of robot bodies simultaneously, making it highly efficient for training machine learning models in robotics. It's designed to streamline robotics applications, focusing on reinforcement learning in a virtual environment.
===[https://gymnasium.farama.org/ Gymnasium]===
Gymnasium is an open-source toolkit for developing and comparing reinforcement learning algorithms. Originally developed by OpenAI as "Gym," it provides a standardized set of environments (like Atari games, robotic simulations, etc.) to test and benchmark AI algorithms. It's widely used in the AI research community to foster innovation and replication in RL studies.
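As a quick illustration of the interface, the sketch below runs a random policy in a classic control task. It assumes <code>gymnasium</code> is installed and uses CartPole only as a stand-in; humanoid environments follow the same API but require the MuJoCo extra.
<syntaxhighlight lang="python">
# Minimal Gymnasium interaction loop (assumes: pip install gymnasium).
import gymnasium as gym

env = gym.make("CartPole-v1")
obs, info = env.reset(seed=0)

for _ in range(200):
    action = env.action_space.sample()                        # random policy, just to show the API
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()

env.close()
</syntaxhighlight>
The <code>reset</code>/<code>step</code> loop above is the interface most training frameworks build on.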
===[[Applications]]===
Over the last decade, several advances have been made in learning locomotion and manipulation skills in simulation; see the [[Applications]] page for a non-comprehensive list.
== Training methods ==
===[[Imitation learning]]===
Imitation Learning is a technique where models learn to perform tasks by mimicking expert behaviors. This approach is often used when defining explicit reward functions is challenging. It accelerates learning by using pre-collected datasets of expert demonstrations, reducing the need for trial-and-error in initial learning phases.
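To make the idea concrete, here is a toy sketch of behavior cloning, the simplest form of imitation learning: fit a policy to expert state-action pairs with supervised regression. The linear "expert" and the data are invented purely for illustration; only NumPy is required.
<syntaxhighlight lang="python">
# Toy behavior cloning: regress actions from observations (illustrative data only).
import numpy as np

rng = np.random.default_rng(0)
obs_dim, act_dim, n_demos = 8, 3, 500

true_policy = rng.normal(size=(obs_dim, act_dim))                  # the hidden "expert"
observations = rng.normal(size=(n_demos, obs_dim))                 # recorded states
actions = observations @ true_policy + 0.01 * rng.normal(size=(n_demos, act_dim))

# Fit a linear policy to the demonstrations with least squares.
learned_policy, *_ = np.linalg.lstsq(observations, actions, rcond=None)

test_obs = rng.normal(size=obs_dim)
print("expert action:", test_obs @ true_policy)
print("cloned action:", test_obs @ learned_policy)
</syntaxhighlight>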
===[[Reinforcement Learning]]===
Reinforcement Learning involves agents learning to make decisions by interacting with an environment to maximize cumulative rewards. It's foundational in fields where sequential decision-making is crucial, like gaming, autonomous vehicles, and robotics. RL uses methods like Q-learning and policy gradient to iteratively improve agent performance based on feedback from the environment.
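The sketch below illustrates the tabular Q-learning update on a small Gymnasium environment; FrozenLake is an arbitrary choice and the hyperparameters are untuned, the point is only to show the mechanics of the update rule.
<syntaxhighlight lang="python">
# Tabular Q-learning on FrozenLake (assumes: pip install gymnasium numpy).
import numpy as np
import gymnasium as gym

env = gym.make("FrozenLake-v1", is_slippery=False)
n_states, n_actions = env.observation_space.n, env.action_space.n
q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.99            # learning rate and discount factor

for episode in range(5000):
    state, _ = env.reset()
    done = False
    while not done:
        # Behave randomly; Q-learning is off-policy, so it still learns greedy values.
        action = env.action_space.sample()
        next_state, reward, terminated, truncated, _ = env.step(action)
        done = terminated or truncated
        # Core update: move Q(s, a) toward r + gamma * max_a' Q(s', a').
        q[state, action] += alpha * (reward + gamma * np.max(q[next_state]) - q[state, action])
        state = next_state

print("greedy policy (0=left, 1=down, 2=right, 3=up):")
print(np.argmax(q, axis=1).reshape(4, 4))
</syntaxhighlight>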
[[Category: Software]]
4c7025f8eee4e7dc648d770d3158af6d142cfa15
Humanoid Robots Wiki:About
4
17
1036
55
2024-05-15T22:15:45Z
204.15.110.167
0
wikitext
text/x-wiki
79fa7f878be7f3098510bab1aa64f111fd6de211
K-Scale Cluster
0
16
1038
930
2024-05-16T00:24:34Z
Ben
2
wikitext
text/x-wiki
The K-Scale Labs clusters are shared clusters for robotics research. This page contains notes on how to access them.
== Onboarding ==
To get onboarded, you should send us the public key that you want to use and maybe your preferred username.
After being onboarded, you should receive the following information:
* Your user ID (for this example, we'll use <code>stompy</code>)
* The jumphost ID (for this example, we'll use <code>127.0.0.1</code>)
* The cluster ID (for this example, we'll use <code>127.0.0.2</code>)
To connect, you should be able to use the following command:
<syntaxhighlight lang="bash">
ssh -o ProxyCommand="ssh -i ~/.ssh/id_rsa -W %h:%p stompy@127.0.0.1" stompy@127.0.0.2 -i ~/.ssh/id_rsa
</syntaxhighlight>
Note that <code>~/.ssh/id_rsa</code> should point to your private key file.
Alternatively, you can add the following to your SSH config file, which should allow you to connect directly. Open the SSH config file (normally located at <code>~/.ssh/config</code> on Ubuntu) in your favorite editor and paste the following:
<syntaxhighlight lang="text">
Host jumphost
User stompy
Hostname 127.0.0.1
IdentityFile ~/.ssh/id_rsa
Host cluster
User stompy
Hostname 127.0.0.2
ProxyJump jumphost
IdentityFile ~/.ssh/id_rsa
</syntaxhighlight>
After setting this up, you can use the command <code>ssh cluster</code> to directly connect.
You can also connect via VS Code; a tutorial on using <code>ssh</code> in VS Code is available [https://code.visualstudio.com/docs/remote/ssh-tutorial here].
Please inform us if you have any issues!
=== Notes ===
* You may need to restart <code>ssh</code> to get it working.
* You may be sharing your part of the cluster with other users. If so, it is a good idea to avoid using all the GPUs. If you're training models in PyTorch, you can do this by setting the <code>CUDA_VISIBLE_DEVICES</code> environment variable (for example, <code>CUDA_VISIBLE_DEVICES=0,1 python train.py</code>).
* You should avoid storing data files and model checkpoints in your root directory. Instead, use the <code>/ephemeral</code> directory. Your home directory should come with a symlink to a subdirectory which you have write access to.
== Reserving a GPU ==
Here is a script you can use for getting an interactive node through Slurm.
<syntaxhighlight lang="bash">
gpunode () {
    # Reuse an existing running interactive job named "gpunode" if there is one.
    local job_id=$(squeue -u "$USER" -h -t R -o %i -n gpunode)
    if [[ -n $job_id ]]; then
        echo "Attaching to job ID $job_id"
        srun --jobid="$job_id" --partition="$SLURM_GPUNODE_PARTITION" --gpus="$SLURM_GPUNODE_NUM_GPUS" --cpus-per-gpu="$SLURM_GPUNODE_CPUS_PER_GPU" --pty "$SLURM_XPUNODE_SHELL"
        return 0
    fi
    # Otherwise request a new interactive allocation.
    echo "Creating new job"
    srun --partition="$SLURM_GPUNODE_PARTITION" --gpus="$SLURM_GPUNODE_NUM_GPUS" --cpus-per-gpu="$SLURM_GPUNODE_CPUS_PER_GPU" --interactive --job-name=gpunode --pty "$SLURM_XPUNODE_SHELL"
}
</syntaxhighlight>
Example env vars:
<syntaxhighlight lang="bash">
export SLURM_GPUNODE_PARTITION='compute'
export SLURM_GPUNODE_NUM_GPUS=1
export SLURM_GPUNODE_CPUS_PER_GPU=4
export SLURM_XPUNODE_SHELL='/bin/bash'
</syntaxhighlight>
Add the <code>gpunode</code> function and the environment variables above to your shell configuration (for example, <code>~/.bashrc</code>), then run <code>gpunode</code>.
You can see partition options by running <code>sinfo</code>.
You might get an error like this: <code>groups: cannot find name for group ID 1506</code>. But things should still run fine. Check with <code>nvidia-smi</code>.
=== Useful Commands ===
Set a node state back to normal:
<syntaxhighlight lang="bash">
sudo scontrol update nodename='nodename' state=resume
</syntaxhighlight>
[[Category:K-Scale]]
34dec652748a6a923c07d8d18b7884991feac81a
1039
1038
2024-05-16T00:25:53Z
Ben
2
wikitext
text/x-wiki
The K-Scale Labs clusters are shared clusters for robotics research. This page contains notes on how to access them.
== Onboarding ==
To get onboarded, you should send us the public key that you want to use and maybe your preferred username.
=== Lambda Cluster ===
After being onboarded, you should receive the following information:
* Your user ID (for this example, we'll use <code>stompy</code>)
* The jumphost ID (for this example, we'll use <code>127.0.0.1</code>)
* The cluster ID (for this example, we'll use <code>127.0.0.2</code>)
To connect, you should be able to use the following command:
<syntaxhighlight lang="bash">
ssh -o ProxyCommand="ssh -i ~/.ssh/id_rsa -W %h:%p stompy@127.0.0.1" stompy@127.0.0.2 -i ~/.ssh/id_rsa
</syntaxhighlight>
Note that <code>~/.ssh/id_rsa</code> should point to your private key file.
Alternatively, you can add the following to your SSH config file, which should allow you to connect directly. Open the SSH config file (normally located at <code>~/.ssh/config</code> on Ubuntu) in your favorite editor and paste the following:
<syntaxhighlight lang="text">
Host jumphost
User stompy
Hostname 127.0.0.1
IdentityFile ~/.ssh/id_rsa
Host cluster
User stompy
Hostname 127.0.0.2
ProxyJump jumphost
IdentityFile ~/.ssh/id_rsa
</syntaxhighlight>
After setting this up, you can use the command <code>ssh cluster</code> to directly connect.
You can also connect via VS Code; a tutorial on using <code>ssh</code> in VS Code is available [https://code.visualstudio.com/docs/remote/ssh-tutorial here].
Please inform us if you have any issues!
=== Notes ===
* You may need to restart <code>ssh</code> to get it working.
* You may be sharing your part of the cluster with other users. If so, it is a good idea to avoid using all the GPUs. If you're training models in PyTorch, you can do this by setting the <code>CUDA_VISIBLE_DEVICES</code> environment variable (for example, <code>CUDA_VISIBLE_DEVICES=0,1 python train.py</code>).
* You should avoid storing data files and model checkpoints in your root directory. Instead, use the <code>/ephemeral</code> directory. Your home directory should come with a symlink to a subdirectory which you have write access to.
=== Andromeda Cluster ===
The Andromeda cluster is a different cluster which uses Slurm for job management. Authentication is different from the Lambda cluster - Ben will provide instructions directly.
==== Reserving a GPU ====
Here is a script you can use for getting an interactive node through Slurm.
<syntaxhighlight lang="bash">
gpunode () {
    # Reuse an existing running interactive job named "gpunode" if there is one.
    local job_id=$(squeue -u "$USER" -h -t R -o %i -n gpunode)
    if [[ -n $job_id ]]; then
        echo "Attaching to job ID $job_id"
        srun --jobid="$job_id" --partition="$SLURM_GPUNODE_PARTITION" --gpus="$SLURM_GPUNODE_NUM_GPUS" --cpus-per-gpu="$SLURM_GPUNODE_CPUS_PER_GPU" --pty "$SLURM_XPUNODE_SHELL"
        return 0
    fi
    # Otherwise request a new interactive allocation.
    echo "Creating new job"
    srun --partition="$SLURM_GPUNODE_PARTITION" --gpus="$SLURM_GPUNODE_NUM_GPUS" --cpus-per-gpu="$SLURM_GPUNODE_CPUS_PER_GPU" --interactive --job-name=gpunode --pty "$SLURM_XPUNODE_SHELL"
}
</syntaxhighlight>
Example env vars:
<syntaxhighlight lang="bash">
export SLURM_GPUNODE_PARTITION='compute'
export SLURM_GPUNODE_NUM_GPUS=1
export SLURM_GPUNODE_CPUS_PER_GPU=4
export SLURM_XPUNODE_SHELL='/bin/bash'
</syntaxhighlight>
Add the <code>gpunode</code> function and the environment variables above to your shell configuration (for example, <code>~/.bashrc</code>), then run <code>gpunode</code>.
You can see partition options by running <code>sinfo</code>.
You might get an error like this: <code>groups: cannot find name for group ID 1506</code>. But things should still run fine. Check with <code>nvidia-smi</code>.
=== Useful Commands ===
Set a node state back to normal:
<syntaxhighlight lang="bash">
sudo scontrol update nodename='nodename' state=resume
</syntaxhighlight>
[[Category:K-Scale]]
8685d8c0b9d53d80ea92ef568b2d36f92ed4edaf
1040
1039
2024-05-16T00:26:13Z
Ben
2
wikitext
text/x-wiki
The K-Scale Labs clusters are shared clusters for robotics research. This page contains notes on how to access them.
== Onboarding ==
To get onboarded, you should send us the public key that you want to use and maybe your preferred username.
=== Lambda Cluster ===
After being onboarded, you should receive the following information:
* Your user ID (for this example, we'll use <code>stompy</code>)
* The jumphost ID (for this example, we'll use <code>127.0.0.1</code>)
* The cluster ID (for this example, we'll use <code>127.0.0.2</code>)
To connect, you should be able to use the following command:
<syntaxhighlight lang="bash">
ssh -o ProxyCommand="ssh -i ~/.ssh/id_rsa -W %h:%p stompy@127.0.0.1" stompy@127.0.0.2 -i ~/.ssh/id_rsa
</syntaxhighlight>
Note that <code>~/.ssh/id_rsa</code> should point to your private key file.
Alternatively, you can add the following to your SSH config file, which should allow you to connect directly. Open the SSH config file (normally located at <code>~/.ssh/config</code> on Ubuntu) in your favorite editor and paste the following:
<syntaxhighlight lang="text">
Host jumphost
User stompy
Hostname 127.0.0.1
IdentityFile ~/.ssh/id_rsa
Host cluster
User stompy
Hostname 127.0.0.2
ProxyJump jumphost
IdentityFile ~/.ssh/id_rsa
</syntaxhighlight>
After setting this up, you can use the command <code>ssh cluster</code> to directly connect.
You can also connect via VS Code; a tutorial on using <code>ssh</code> in VS Code is available [https://code.visualstudio.com/docs/remote/ssh-tutorial here].
Please inform us if you have any issues!
=== Notes ===
* You may need to restart <code>ssh</code> to get it working.
* You may be sharing your part of the cluster with other users. If so, it is a good idea to avoid using all the GPUs. If you're training models in PyTorch, you can do this by setting the <code>CUDA_VISIBLE_DEVICES</code> environment variable; a short sketch follows this list.
* You should avoid storing data files and model checkpoints in your root directory. Instead, use the <code>/ephemeral</code> directory. Your home directory should come with a symlink to a subdirectory which you have write access to.
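The sketch below shows one way to restrict a PyTorch job to specific GPUs from Python; the variable can equally be set in the shell (for example, <code>CUDA_VISIBLE_DEVICES=0,1 python train.py</code>). The GPU indices and script name are illustrative only, not an assigned allocation.
<syntaxhighlight lang="python">
# Claim only specific GPUs on a shared node (illustrative indices, not an allocation).
# CUDA_VISIBLE_DEVICES must be set before CUDA is initialized, so export it before
# torch first touches the GPUs; setting it in the shell works just as well.
import os

os.environ.setdefault("CUDA_VISIBLE_DEVICES", "0,1")

import torch

print("GPUs visible to this process:", torch.cuda.device_count())
</syntaxhighlight>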
=== Andromeda Cluster ===
The Andromeda cluster is a different cluster which uses Slurm for job management. Authentication is different from the Lambda cluster - Ben will provide instructions directly.
==== Reserving a GPU ====
Here is a script you can use for getting an interactive node through Slurm.
<syntaxhighlight lang="bash">
gpunode () {
    # Reuse an existing running interactive job named "gpunode" if there is one.
    local job_id=$(squeue -u "$USER" -h -t R -o %i -n gpunode)
    if [[ -n $job_id ]]; then
        echo "Attaching to job ID $job_id"
        srun --jobid="$job_id" --partition="$SLURM_GPUNODE_PARTITION" --gpus="$SLURM_GPUNODE_NUM_GPUS" --cpus-per-gpu="$SLURM_GPUNODE_CPUS_PER_GPU" --pty "$SLURM_XPUNODE_SHELL"
        return 0
    fi
    # Otherwise request a new interactive allocation.
    echo "Creating new job"
    srun --partition="$SLURM_GPUNODE_PARTITION" --gpus="$SLURM_GPUNODE_NUM_GPUS" --cpus-per-gpu="$SLURM_GPUNODE_CPUS_PER_GPU" --interactive --job-name=gpunode --pty "$SLURM_XPUNODE_SHELL"
}
</syntaxhighlight>
Example env vars:
<syntaxhighlight lang="bash">
export SLURM_GPUNODE_PARTITION='compute'
export SLURM_GPUNODE_NUM_GPUS=1
export SLURM_GPUNODE_CPUS_PER_GPU=4
export SLURM_XPUNODE_SHELL='/bin/bash'
</syntaxhighlight>
Add the <code>gpunode</code> function and the environment variables above to your shell configuration (for example, <code>~/.bashrc</code>), then run <code>gpunode</code>.
You can see partition options by running <code>sinfo</code>.
You might get an error like this: <code>groups: cannot find name for group ID 1506</code>. But things should still run fine. Check with <code>nvidia-smi</code>.
==== Useful Commands ====
Set a node state back to normal:
<syntaxhighlight lang="bash">
sudo scontrol update nodename='nodename' state=resume
</syntaxhighlight>
[[Category:K-Scale]]
05b83ad5f11753866813aef9d48b28510dedf218
Reinforcement Learning
0
34
1041
759
2024-05-16T06:05:08Z
Vrtnis
21
/* Training algorithms */
wikitext
text/x-wiki
== Training algorithms ==
* [https://en.wikipedia.org/wiki/Advantage_Actor_Critic A2C] (also see slides on Actor Critic methods at [https://cs224r.stanford.edu/slides/cs224r-actor-critic-split.pdf] Stanford CS224R)
* [https://en.wikipedia.org/wiki/Proximal_policy_optimization PPO]
* [https://spinningup.openai.com/en/latest/algorithms/sac.html SAC]
== Resources ==
* [https://mandi-zhao.gitbook.io/deeprl-notes Mandy Zhao's Reinforcement Learning Notes]
[[Category: Software]]
a05cd43f4cf069c654ffb9ff62c2e3c06d46b99d
1042
1041
2024-05-16T06:21:21Z
Vrtnis
21
/* Training algorithms */
wikitext
text/x-wiki
== Training algorithms ==
* [https://en.wikipedia.org/wiki/Advantage_Actor_Critic A2C] (also see slides on Actor Critic methods at [1])
* [https://en.wikipedia.org/wiki/Proximal_policy_optimization PPO]
* [https://spinningup.openai.com/en/latest/algorithms/sac.html SAC]
== References ==
* [1] [https://cs224r.stanford.edu/slides/cs224r-actor-critic-split.pdf Stanford CS224R]
== Resources ==
* [https://mandi-zhao.gitbook.io/deeprl-notes Mandy Zhao's Reinforcement Learning Notes]
[[Category: Software]]
dfbb941d25dc755b1e4d25df186c10e7ff36c9ae
1043
1042
2024-05-16T06:22:52Z
Vrtnis
21
wikitext
text/x-wiki
== Training algorithms ==
* [https://en.wikipedia.org/wiki/Advantage_Actor_Critic A2C]
* [https://en.wikipedia.org/wiki/Proximal_policy_optimization PPO]
* [https://spinningup.openai.com/en/latest/algorithms/sac.html SAC]
== Resources ==
* [https://mandi-zhao.gitbook.io/deeprl-notes Mandy Zhao's Reinforcement Learning Notes]
* [https://cs224r.stanford.edu/slides/cs224r-actor-critic-split.pdf Stanford CS224R Actor Critic Slides]
[[Category: Software]]
81b6d22b02fc91d39136eccf3357bcf77d4533b0
1050
1043
2024-05-16T19:54:18Z
Vrtnis
21
wikitext
text/x-wiki
== Reinforcement Learning (RL) ==
Reinforcement Learning (RL) is a machine learning approach where an agent learns to perform tasks by interacting with an environment. It involves the agent receiving rewards or penalties based on its actions, and using this feedback to improve its performance over time.
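For a concrete starting point, the sketch below trains a PPO agent on a toy Gymnasium task using the third-party <code>stable-baselines3</code> library; neither the library nor the environment is prescribed by this page, they are simply convenient assumptions for a short runnable example. PPO is one of the algorithms listed in the next section.
<syntaxhighlight lang="python">
# Minimal PPO training run (assumes: pip install gymnasium stable-baselines3).
import gymnasium as gym
from stable_baselines3 import PPO

env = gym.make("CartPole-v1")
model = PPO("MlpPolicy", env, verbose=0)   # small MLP policy, default hyperparameters
model.learn(total_timesteps=10_000)        # a short run, just to exercise the loop

# Evaluate the learned policy for one episode.
obs, _ = env.reset(seed=0)
done, total_reward = False, 0.0
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(int(action))
    total_reward += float(reward)
    done = terminated or truncated
print("episode return:", total_reward)
</syntaxhighlight>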
== Training algorithms ==
* [https://en.wikipedia.org/wiki/Advantage_Actor_Critic A2C]
* [https://en.wikipedia.org/wiki/Proximal_policy_optimization PPO]
* [https://spinningup.openai.com/en/latest/algorithms/sac.html SAC]
== Resources ==
* [https://mandi-zhao.gitbook.io/deeprl-notes Mandy Zhao's Reinforcement Learning Notes]
* [https://cs224r.stanford.edu/slides/cs224r-actor-critic-split.pdf Stanford CS224R Actor Critic Slides]
[[Category: Software]]
878f1cc05b6d90e89c211e5a972dd1c1098014ab
1051
1050
2024-05-16T19:56:42Z
Vrtnis
21
wikitext
text/x-wiki
== Reinforcement Learning (RL) ==
Reinforcement Learning (RL) is a machine learning approach where an agent learns to perform tasks by interacting with an environment. It involves the agent receiving rewards or penalties based on its actions and using this feedback to improve its performance over time. RL is particularly useful in robotics for training robots to perform complex tasks autonomously. Here's how RL is applied in robotics, using simulation environments like Isaac Sim and MuJoCo:
== Training algorithms ==
* [https://en.wikipedia.org/wiki/Advantage_Actor_Critic A2C]
* [https://en.wikipedia.org/wiki/Proximal_policy_optimization PPO]
* [https://spinningup.openai.com/en/latest/algorithms/sac.html SAC]
== Resources ==
* [https://mandi-zhao.gitbook.io/deeprl-notes Mandy Zhao's Reinforcement Learning Notes]
* [https://cs224r.stanford.edu/slides/cs224r-actor-critic-split.pdf Stanford CS224R Actor Critic Slides]
[[Category: Software]]
a7bf02ec47c062772e3d6b206665bc477e1e6381
1052
1051
2024-05-16T19:57:42Z
Vrtnis
21
/* Adding Isaac related details to Reinforcement Learning (RL) */
wikitext
text/x-wiki
== Reinforcement Learning (RL) ==
Reinforcement Learning (RL) is a machine learning approach where an agent learns to perform tasks by interacting with an environment. It involves the agent receiving rewards or penalties based on its actions and using this feedback to improve its performance over time. RL is particularly useful in robotics for training robots to perform complex tasks autonomously. Here's how RL is applied in robotics, using simulation environments like Isaac Sim and MuJoCo:
== RL in Robotics ==
=== Practical Applications of RL ===
==== Task Automation ====
* Robots can be trained to perform repetitive or dangerous tasks autonomously, such as assembly line work, welding, or hazardous material handling.
* RL enables robots to adapt to new tasks without extensive reprogramming, making them versatile for various industrial applications.
== Training algorithms ==
* [https://en.wikipedia.org/wiki/Advantage_Actor_Critic A2C]
* [https://en.wikipedia.org/wiki/Proximal_policy_optimization PPO]
* [https://spinningup.openai.com/en/latest/algorithms/sac.html SAC]
== Resources ==
* [https://mandi-zhao.gitbook.io/deeprl-notes Mandy Zhao's Reinforcement Learning Notes]
* [https://cs224r.stanford.edu/slides/cs224r-actor-critic-split.pdf Stanford CS224R Actor Critic Slides]
[[Category: Software]]
3d65c0fdedca6b5a4db552e3b149e27f87ac7cf1
1053
1052
2024-05-16T20:01:06Z
Vrtnis
21
wikitext
text/x-wiki
== Reinforcement Learning (RL) ==
Reinforcement Learning (RL) is a machine learning approach where an agent learns to perform tasks by interacting with an environment. It involves the agent receiving rewards or penalties based on its actions and using this feedback to improve its performance over time. RL is particularly useful in robotics for training robots to perform complex tasks autonomously. Here's how RL is applied in robotics, using simulation environments like Isaac Sim and MuJoCo:
== RL in Robotics ==
=== Practical Applications of RL ===
==== Task Automation ====
* Robots can be trained to perform repetitive or dangerous tasks autonomously, such as assembly line work, welding, or hazardous material handling.
* RL enables robots to adapt to new tasks without extensive reprogramming, making them versatile for various industrial applications.
==== Navigation and Manipulation ====
* RL is used to train robots for navigating complex environments and manipulating objects with precision, which is crucial for tasks like warehouse logistics, domestic chores, and medical surgeries.
== Training algorithms ==
* [https://en.wikipedia.org/wiki/Advantage_Actor_Critic A2C]
* [https://en.wikipedia.org/wiki/Proximal_policy_optimization PPO]
* [https://spinningup.openai.com/en/latest/algorithms/sac.html SAC]
== Resources ==
* [https://mandi-zhao.gitbook.io/deeprl-notes Mandy Zhao's Reinforcement Learning Notes]
* [https://cs224r.stanford.edu/slides/cs224r-actor-critic-split.pdf Stanford CS224R Actor Critic Slides]
[[Category: Software]]
1eed9af76f858a24b5378576677f6dd810f39437
1054
1053
2024-05-16T20:02:10Z
Vrtnis
21
wikitext
text/x-wiki
== Reinforcement Learning (RL) ==
Reinforcement Learning (RL) is a machine learning approach where an agent learns to perform tasks by interacting with an environment. It involves the agent receiving rewards or penalties based on its actions and using this feedback to improve its performance over time. RL is particularly useful in robotics for training robots to perform complex tasks autonomously. Here's how RL is applied in robotics, using simulation environments like Isaac Sim and MuJoCo:
== RL in Robotics ==
=== Practical Applications of RL ===
==== Task Automation ====
* Robots can be trained to perform repetitive or dangerous tasks autonomously, such as assembly line work, welding, or hazardous material handling.
* RL enables robots to adapt to new tasks without extensive reprogramming, making them versatile for various industrial applications.
==== Navigation and Manipulation ====
* RL is used to train robots for navigating complex environments and manipulating objects with precision, which is crucial for tasks like warehouse logistics, domestic chores, and medical surgeries.
=== Simulation Environments ===
==== Isaac Sim ====
* Isaac Sim provides a highly realistic and interactive environment where robots can be trained safely and efficiently.
* The simulated environment includes physics, sensors, and other elements that mimic real-world conditions, enabling the transfer of learned behaviors to physical robots.
== Training algorithms ==
* [https://en.wikipedia.org/wiki/Advantage_Actor_Critic A2C]
* [https://en.wikipedia.org/wiki/Proximal_policy_optimization PPO]
* [https://spinningup.openai.com/en/latest/algorithms/sac.html SAC]
== Resources ==
* [https://mandi-zhao.gitbook.io/deeprl-notes Mandy Zhao's Reinforcement Learning Notes]
* [https://cs224r.stanford.edu/slides/cs224r-actor-critic-split.pdf Stanford CS224R Actor Critic Slides]
[[Category: Software]]
3c56c8092b10d48a7e53c8e9d9209b562dfeb800
1055
1054
2024-05-16T20:03:33Z
Vrtnis
21
/* Add work in progress tag at top */
wikitext
text/x-wiki
This guide is incomplete and a work in progress; you can help by expanding it!
== Reinforcement Learning (RL) ==
Reinforcement Learning (RL) is a machine learning approach where an agent learns to perform tasks by interacting with an environment. It involves the agent receiving rewards or penalties based on its actions and using this feedback to improve its performance over time. RL is particularly useful in robotics for training robots to perform complex tasks autonomously. Here's how RL is applied in robotics, using simulation environments like Isaac Sim and MuJoCo:
== RL in Robotics ==
=== Practical Applications of RL ===
==== Task Automation ====
* Robots can be trained to perform repetitive or dangerous tasks autonomously, such as assembly line work, welding, or hazardous material handling.
* RL enables robots to adapt to new tasks without extensive reprogramming, making them versatile for various industrial applications.
==== Navigation and Manipulation ====
* RL is used to train robots for navigating complex environments and manipulating objects with precision, which is crucial for tasks like warehouse logistics, domestic chores, and medical surgeries.
=== Simulation Environments ===
==== Isaac Sim ====
* Isaac Sim provides a highly realistic and interactive environment where robots can be trained safely and efficiently.
* The simulated environment includes physics, sensors, and other elements that mimic real-world conditions, enabling the transfer of learned behaviors to physical robots.
== Training algorithms ==
* [https://en.wikipedia.org/wiki/Advantage_Actor_Critic A2C]
* [https://en.wikipedia.org/wiki/Proximal_policy_optimization PPO]
* [https://spinningup.openai.com/en/latest/algorithms/sac.html SAC]
== Resources ==
* [https://mandi-zhao.gitbook.io/deeprl-notes Mandy Zhao's Reinforcement Learning Notes]
* [https://cs224r.stanford.edu/slides/cs224r-actor-critic-split.pdf Stanford CS224R Actor Critic Slides]
[[Category: Software]]
11e85e3de6bfb760b73916e6ebbb12a15713e207
1056
1055
2024-05-16T20:06:41Z
Vrtnis
21
/* Add MuJoCo*/
wikitext
text/x-wiki
This guide is incomplete and a work in progress; you can help by expanding it!
== Reinforcement Learning (RL) ==
Reinforcement Learning (RL) is a machine learning approach where an agent learns to perform tasks by interacting with an environment. It involves the agent receiving rewards or penalties based on its actions and using this feedback to improve its performance over time. RL is particularly useful in robotics for training robots to perform complex tasks autonomously. Here's how RL is applied in robotics, using simulation environments like Isaac Sim and MuJoCo:
== RL in Robotics ==
=== Practical Applications of RL ===
==== Task Automation ====
* Robots can be trained to perform repetitive or dangerous tasks autonomously, such as assembly line work, welding, or hazardous material handling.
* RL enables robots to adapt to new tasks without extensive reprogramming, making them versatile for various industrial applications.
==== Navigation and Manipulation ====
* RL is used to train robots for navigating complex environments and manipulating objects with precision, which is crucial for tasks like warehouse logistics, domestic chores, and medical surgeries.
=== Simulation Environments ===
==== Isaac Sim ====
* Isaac Sim provides a highly realistic and interactive environment where robots can be trained safely and efficiently.
* The simulated environment includes physics, sensors, and other elements that mimic real-world conditions, enabling the transfer of learned behaviors to physical robots.
==== MuJoCo ====
* MuJoCo (Multi-Joint dynamics with Contact) is a physics engine designed for research and development in robotics, machine learning, and biomechanics.
* It offers fast and accurate simulations, which are essential for training RL agents in tasks involving complex dynamics and contact-rich interactions.
== Training algorithms ==
* [https://en.wikipedia.org/wiki/Advantage_Actor_Critic A2C]
* [https://en.wikipedia.org/wiki/Proximal_policy_optimization PPO]
* [https://spinningup.openai.com/en/latest/algorithms/sac.html SAC]
== Resources ==
* [https://mandi-zhao.gitbook.io/deeprl-notes Mandy Zhao's Reinforcement Learning Notes]
* [https://cs224r.stanford.edu/slides/cs224r-actor-critic-split.pdf Stanford CS224R Actor Critic Slides]
[[Category: Software]]
7376f8bb8c449a6908ca7501563ac19e8e908671
K-Scale Intern Onboarding
0
139
1046
999
2024-05-16T18:14:23Z
Ben
2
wikitext
text/x-wiki
Congratulations on your internship at K-Scale Labs! We are excited for you to join us.
=== Onboarding ===
* Watch out for an email from Gusto (our HR software), with an official offer letter and instructions on how to onboard you into our system.
* Once you accept, Ben will add you to the system, after which you will have to enter your bank account information in order to be paid.
* Gusto will send you an email the day before your expected start date with additional onboarding tasks. If you are waiting on an email that hasn't arrived yet, odds are it is simply not needed yet.
=== Pre-Internship Checklist ===
* Create a wiki account and mark yourself as an employee (you can use [[User:Ben]] as a template). You'll use your account as the main way to keep track of what you've done over the course of the internship.
* Contribute an article about something you find interesting. See the [[Contributing]] guide.
=== What To Bring ===
* Bring your living essentials (clothing, toothbrush, etc.)
** We will have beds, bedsheets, towels
* For your first day, you will need documents for your I9 authorizing you to work with us. The easiest is to just bring your passport or passport card. Alternatively, you'll need your driver's license or a federal photo ID, AND your social security card or birth certificate.
=== Arrival ===
* We have a company standup every day at 8:45 AM
* Arrive anytime prior to your start date
=== Additional Notes ===
* For travel expenses, please purchase your own flight and keep your receipts so that we can reimburse you later.
[[Category:K-Scale]]
e41b7f10c0fdffccea25597079888e21ea6bd098
1047
1046
2024-05-16T18:15:52Z
Ben
2
wikitext
text/x-wiki
Congratulations on your internship at K-Scale Labs! We are excited for you to join us.
=== Onboarding ===
* Watch out for an email from Gusto (our HR software), with an official offer letter and instructions on how to onboard you into our system.
* Once you accept, Ben will add you to the system, after which you will have to enter your bank account information in order to be paid.
* Gusto will send you an email the day before your expected start date with additional onboarding tasks. If you are waiting on an email that hasn't arrived yet, odds are it is simply not needed yet.
=== Pre-Internship Checklist ===
* Create a wiki account and mark yourself as an employee (you can use [[User:Ben]] as a template). You'll use your account as the main way to keep track of what you've done over the course of the internship.
* Contribute an article about something you find interesting. See the [[Contributing]] guide.
* For your first day, you will need documents for your I9 authorizing you to work with us. The easiest is to just bring your passport or passport card. Alternatively, you'll need your driver's license or a federal photo ID, AND your social security card or birth certificate.
=== What To Bring ===
* Bring your living essentials (clothing, toothbrush, etc.)
* We will have beds, bedsheets, towels, toothpaste, and shampoo for you. Other miscellaneous toiletries will also be taken care of.
=== Arrival ===
* We have a company standup every day at 8:45 AM.
* Arrive anytime prior to your start date. If you want to come early that is fine as well.
=== Expenses ===
* For travel expenses, please purchase your own flight and keep your receipts so that we can reimburse you later.
[[Category:K-Scale]]
587b281ded6114dc934b578775bc7a58fc7251ed
1048
1047
2024-05-16T18:22:09Z
Ben
2
wikitext
text/x-wiki
Congratulations on your internship at K-Scale Labs! We are excited for you to join us.
We've moved our onboarding checklist to our [https://wiki.kscale.dev/w/Intern_Onboarding internal wiki]. Please reach out to Ben to get your credentials.
[[Category:K-Scale]]
9a3c1bb090edc0d701f79c074d99de83be20ec57
1049
1048
2024-05-16T18:23:21Z
Ben
2
wikitext
text/x-wiki
Congratulations on your internship at K-Scale Labs! We are excited for you to join us.
=== Pre-Internship Checklist ===
* Create a wiki account and mark yourself as an employee (you can use [[User:Ben]] as a template). You'll use your account as the main way to keep track of what you've done over the course of the internship.
* Contribute an article about something you find interesting. See the [[Contributing]] guide.
* Once you get your credentials, follow the onboarding checklist on our [https://wiki.kscale.dev/w/Intern_Onboarding internal wiki]. Please reach out to Ben if you have any questions.
[[Category:K-Scale]]
37a0733da7a987351e3d80ad4a88a2eaa25d8145
Getting Started with Humanoid Robots
0
193
1057
941
2024-05-16T20:08:53Z
Vrtnis
21
/* Added wiki-link to RL page */
wikitext
text/x-wiki
This is a build guide for getting started experimenting with your own humanoid robot.
This is incomplete and a work in progress; you can help by expanding it!
== Building Your Humanoid Robot ==
In humanoid robotics, choosing the right components (for example, actuators and gearboxes) is crucial. Builders often use planetary and cycloidal gear actuators for their precision and strength, along with Series Elastic and Quasi-Direct Drive actuators for smoother, more natural movements. Advanced designs like the [https://humanoids.wiki/w/MIT_Cheetah MIT Cheetah] actuator push the boundaries with fast, agile movements. Projects like the SPIN initiative are also key, as they make high-quality actuator technology more accessible, helping the field evolve and improve.
== Actuators and Gearboxes ==
=== Actuator Types and Design Inspirations ===
==== Planetary and Cycloidal Gear Actuators ====
These actuators remain popular in the robotics community due to their high torque output and compact form factors. Planetary gears are favored for their efficiency and ability to handle high power densities, crucial for humanoid robotics. Cycloidal gears offer superior load-bearing capabilities and minimal backlash, ideal for precise motion control.
MyActuator (just one option) offers a variety of planetary actuators. These actuators, while still relatively pricey, offer robust performance and are integral to an efficient build. Some models are:
* RMD X4: A lightweight and compact actuator that provides precise control and high efficiency.
* RMD X6: Offers a good balance of torque and speed, suitable for medium-sized applications.
* RMD X8: Features a more powerful motor and higher torque capacity, making it ideal for more demanding tasks.
* RMD X10: The most powerful actuator listed here, designed for high-torque applications with excellent control features.
==== Series Elastic and Quasi-Direct Drive Actuators ====
Series Elastic Actuators (SEAs) are used in applications requiring safe and compliant human-robot interaction. They incorporate elastic elements, allowing for energy absorption and safer interactions. Quasi-Direct Drive Actuators provide a balance between the control fidelity of direct drives and the mechanical simplicity of geared systems, promoting natural and responsive movements.
Some things to consider:
The springs in SEAs are where the magic happens. Choosing the right stiffness is a balancing act between getting precise torque control and avoiding sluggish responses. Since the spring is constantly flexing, you need sensors tuned to give accurate torque measurements; recalibrate them regularly to keep movements smooth and predictable. You also want finely tuned control loops to make SEAs shine: a high-frequency loop can make your robot more agile in handling external forces. PID controllers are a solid starting point (see the sketch below), or you can try out more advanced strategies.
Friction can really impact your torque control, especially in gearboxes and linkages, so low-friction components and proper lubrication will help keep everything moving smoothly. Make sure the spring is positioned directly between the actuator and the joint; if not, your robot won't get the full benefit of force sensing, and that precision will be lost. If your robot is doing a lot of high-impact activities, the springs can wear out, so keep an eye on them to avoid unexpected breakdowns. SEAs thrive on real-time feedback, so make sure your software can handle data quickly, perhaps using a real-time operating system or optimized signal processing.
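Below is a minimal sketch of such a PID torque loop for an SEA. The stiffness, gains, and sensor/motor functions are placeholders invented for illustration; a real controller would read the encoders on both sides of the spring, command an actual motor driver, and run on a fixed-rate timer.
<syntaxhighlight lang="python">
# Toy PID torque loop for a series elastic actuator (all values illustrative).
import time

SPRING_K = 50.0            # N*m/rad, spring stiffness (made-up value)
KP, KI, KD = 2.0, 0.5, 0.05

def read_spring_deflection() -> float:
    """Placeholder: difference between motor-side and joint-side encoders (rad)."""
    return 0.01

def send_motor_command(effort: float) -> None:
    """Placeholder: forward the command to the motor driver."""
    pass

def torque_pid_loop(torque_setpoint: float, dt: float = 0.001, steps: int = 1000) -> None:
    integral, prev_error = 0.0, 0.0
    for _ in range(steps):
        measured_torque = SPRING_K * read_spring_deflection()   # tau = k * deflection
        error = torque_setpoint - measured_torque
        integral += error * dt
        derivative = (error - prev_error) / dt
        prev_error = error
        send_motor_command(KP * error + KI * integral + KD * derivative)
        time.sleep(dt)   # stand-in for a fixed-rate control timer

torque_pid_loop(torque_setpoint=1.0)
</syntaxhighlight>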
==== MIT Cheetah Actuator ====
The MIT Cheetah actuator design is a notable example that several community members are considering emulating. Its design optimizes for rapid, dynamic movements and could potentially set a standard for agile robotic locomotion. It is designed to pack a lot of power into a lightweight, compact system, offering excellent torque and control without being bulky, which makes it well suited to mobile robots that need to be quick on their feet. It is also energy-efficient and provides a high torque-to-weight ratio, so robots can move fast and precisely, which is essential for tricky, agile movements.
One of the coolest things about this actuator is how it manages to minimize backlash, giving you smooth, accurate control over the robot's motion. Its integrated design also means the motor and controller work together seamlessly, which keeps the system streamlined. Plus, the advanced control algorithms make it easy for the actuator to handle dynamic motions, whether it's fast acceleration or sharp turns. If you're building a robot that needs to move like a sprinter while staying super nimble, the MIT Cheetah actuator is an awesome choice.
Here is the [https://fab.cba.mit.edu/classes/865.18/motion/papers/mit-cheetah-actuator.pdf MIT research paper] if you are interested in a deeper dive.
=== Open-Source Development and Collaboration ===
==== SPIN: A Revolutionary Servo Project ====
The [https://github.com/atopile/spin-servo-drive SPIN Project] by Atopile is an open-source hardware project aimed at making it easier and more cost-effective to use BLDC servo motors. It is particularly notable for its potential to democratize high-quality actuator technology, making it accessible to a broader range of developers and hobbyists.
=== Community Insights and Future Directions ===
==== Comprehensive Actuator Comparisons ====
The humanoid robotics community actively discusses the need for a universal platform to compare and contrast the cost and performance of commercially available actuators. This could involve developing a comprehensive database or chart detailing each actuator's cost per Newton-meter, control schemes, and RPM, providing a valuable resource for both newcomers and experienced developers.
Here is a [https://jakeread.pages.cba.mit.edu/actuators/ scatter plot] of actuators hosted at MIT
==== Custom Actuator Developments ====
Iris Dynamics suggests its [https://irisdynamics.com/products/orca-series Orca-series electric linear actuators] can match the capabilities of human muscles, making them particularly interesting for humanoid applications.
== Assembly Tips ==
=== Community Forums ===
Leverage discussions from platforms like RobotForum to avoid common pitfalls. Whether it's selecting the right planetary gearbox or figuring out the optimal motor for each joint, community insights can be invaluable.
=== Programming and Control ===
==== ROS (Robot Operating System) ====
Start with ROS for an extensive suite of tools for programming and control, suitable for managing complex robotic functions. ROS serves as a valuable abstraction for understanding the diverse components of a robotics system. However, it is a large framework, and modifying it can present challenges for some users. The learning curve can be steep, and because it leans heavily on third-party packages, addressing issues can require additional effort and expertise.
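For a feel of the API, here is a minimal ROS 2 node in Python using <code>rclpy</code>. It assumes a sourced ROS 2 installation and only publishes a heartbeat string, which is a placeholder rather than a real control interface; ROS 1 (<code>rospy</code>) code looks similar.
<syntaxhighlight lang="python">
# Minimal ROS 2 publisher node (assumes a ROS 2 environment is sourced).
import rclpy
from rclpy.node import Node
from std_msgs.msg import String

class Heartbeat(Node):
    def __init__(self):
        super().__init__("heartbeat")
        self.pub = self.create_publisher(String, "heartbeat", 10)
        self.timer = self.create_timer(1.0, self.tick)   # fire once per second

    def tick(self):
        msg = String()
        msg.data = "humanoid controller alive"
        self.pub.publish(msg)

def main():
    rclpy.init()
    node = Heartbeat()
    try:
        rclpy.spin(node)
    finally:
        node.destroy_node()
        rclpy.shutdown()

if __name__ == "__main__":
    main()
</syntaxhighlight>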
==== Custom Software Solutions ====
Explore custom algorithms for adaptive control or reactive behaviors. Integrate advanced sensor feedback loops for real-time adjustments.
== Experimenting with Your Humanoid Robot ==
=== Testing and Iteration ===
==== Virtual Testing Before Physical Implementation in Humanoid Robotics ====
NVIDIA's Isaac Sim and Isaac Gym, alongside other simulators, form a crucial foundation for designing and testing humanoid robots virtually. Insights and suggestions from experts working with these tools are captured below.
===== Isaac-Based Simulators and Frameworks =====
====== Isaac Sim ======
* '''IDE Experience''': Provides a comprehensive, if complex, simulation environment.
* '''PhysX Engine''': Utilizes the PhysX engine to handle both contact and joint constraints, though Isaac Sim currently does not fully expose closed-loop constraint capabilities.
* '''Joint Constraints''': Supports maximal coordinate systems, which include joint constraints that are common in articulated robots.
* '''Virtual Sensors''': Allows the simulation of perception with virtual cameras and LiDARs, providing policy training inputs rendered with NVIDIA RTX.
====== Isaac Gym ======
* '''[[Reinforcement Learning]] Training''': Enables parallel environments for fast policy training.
* '''PHC Approach''': Integrates AMP for real-time pose control, making it easier to teach new skills.
* '''Gait Optimization Issues''': While 17-DOF walking tasks work well, gait reward optimization needs refinement for more complex tasks.
* '''Closed-Loop Articulation''': Belt-driven mechanisms provide a viable alternative for certain closed-loop designs.
====== Orbit Framework ======
* '''Unified Training Framework''': Integrates Isaac Sim and Isaac Gym for modular and consistent policy validation.
* '''OmniIsaacGymEnvs''': Offers predefined tasks like walking and standing.
====== Omniverse Isaac Gym ======
* '''Shift in Development''': NVIDIA is consolidating Isaac Gym into Isaac Sim through Omniverse, providing the best of both worlds.
* '''Challenges''': Demands powerful NVIDIA GPUs, potentially limiting some development workflows.
===== External Tools and Comparative Platforms =====
====== Legged Gym ======
A repository showcasing the state-of-the-art in legged robot training.
====== MuJoCo (MJX)======
Offers a lightweight open-source alternative, supporting maximal coordinate constraints, and is easier to work with. The MuJoCo_MPC repository, created by Google DeepMind, is a toolset that combines Model Predictive Control (MPC) with the MuJoCo physics engine to create real-time behavior synthesis. With the advanced MJX extension, which uses GPU acceleration, it can simulate multiple environments in parallel. One approach is to try to replicate the techniques detailed in the AMP (Adversarial Motion Priors) paper to achieve agile humanoid behavior; for example, one could implement a humanoid get-up sequence matching what was described in the AMP research.
There has been collaboration between different projects, like Stompy, to get humanoid simulations up and running. You could try getting Gymnasium to handle the URDF (Universal Robot Description Format) file format. Although converting to MJCF (MuJoCo's XML-based format) may present some challenges, we can still get it to work and refine the motor and actuator setup.
Although MuJoCo can be slower in single-environment simulations, the MJX extension and its parallel processing potential make it a solid competitor. Compared to environments like NVIDIA's Isaac Gym, MuJoCo might stand out for its extensibility and rapid development. One goal could be to try to recreate the walking, running, and getting-up behaviors described in the AMP paper and use them as a foundation for training robust humanoid movements in simulation.
====== VSim ======
Claims to be 10x faster than other simulators.
====== ManiSkill/Sapien ======
Provides tactile simulation and visual-based policy training that is up to 100x faster than Isaac Sim.
===== Best Practices for Virtual Testing =====
* Incremental Complexity: Start simple and build up to more complex environments and tasks.
* Cross-Simulator Validation: Validate robot models across simulators (e.g., Isaac and MuJoCo) to ensure robustness.
* Incorporate Real-World Fidelity: Include sensor noise and imperfections for better policy generalization.
* Optimize Resources:
** Use Azure's A100 GPUs for Isaac training.
** Capture real-world data to refine virtual training.
By understanding the nuances and strengths of each simulator, developers can refine their humanoid robots effectively. Using Isaac Sim, Isaac Gym, and complementary tools, a robust simulation approach ensures smooth virtual-to-physical transferability while reducing development time and costs.
More resources are available at [https://humanoids.wiki/w/Learning_algorithms Learning Algorithms]
== Real-World Testing ==
Gradually transition to physical testing, beginning with simple tasks and moving to more complex interactions.
=== Data Collection and Analysis ===
==== Camera Systems ====
Consider integrating advanced camera systems like those from e-con Systems or Arducam for visual feedback and navigation. Evaluate camera choices based on factors like latency, resolution, and ease of integration with your main control system.
== Advanced Customization and Community Engagement ==
=== Open Source Projects ===
Contribute to an open-source project. Platforms like GitHub host numerous projects where you can collaborate with others, such as [https://github.com/kscalelabs K-Scale]. Check out [https://humanoids.wiki/w/Stompy Stompy]!
=== Modular Design ===
Engage in modular robot design to easily swap components or aesthetics. This approach allows for extensive customization and upgrades over time.
== Safety and Continuous Learning ==
=== Safety Protocols ===
Always implement robust safety measures when testing and demonstrating your robot.
0c8ba42aeac159bde9eb89e1f302dc3cf63fecf3
Learning algorithms
0
32
1058
1035
2024-05-16T20:13:27Z
Vrtnis
21
/* Mujoco */
wikitext
text/x-wiki
Learning algorithms make it possible to train humanoids to perform different skills such as manipulation or locomotion. Below is an overview of general approaches to training machine learning models for humanoid robots, with example [[applications]]. Typically you need a simulator, a training framework, and a machine learning method to train end-to-end behaviors.
== Physics engines ==
Physics engines are software libraries designed to simulate physical systems in a virtual environment. They are crucial in a variety of fields such as video games, animation, robotics, and engineering simulations. These engines handle the mathematics involved in simulating physical processes like motion, collisions, and fluid dynamics.
For a much more comprehensive overview see [https://simulately.wiki/docs/ Simulately].
===PhysX===
PhysX is a physics engine by NVIDIA used primarily for video games and real-time simulations. It supports rigid body dynamics, cloth simulation, and particle effects, enhancing realism and interactivity in 3D environments.
===MuJoCo===
MuJoCo (Multi-Joint dynamics with Contact) is a physics engine designed for research in robotics and biomechanics. It's known for its speed, accuracy, and ease of use, making it popular for simulating complex systems with robotics and articulated structures.
===Bullet===
Bullet is a physics engine supporting real-time collision detection and multi-physics simulation for VR, games, visual effects, robotics, and machine learning.
==Simulators==
===[[Isaac Sim]]===
Isaac Sim is NVIDIA’s simulation platform for robotics development. It’s part of their Isaac Robotics platform and uses advanced graphics and AI to create realistic simulations.
==== Isaac Sim Features ====
* '''Advanced Physics Simulation''': Includes PhysX and Flex for detailed simulations of physical interactions like rigid bodies, soft bodies, and fluids.
* '''Photorealistic Rendering''': Uses NVIDIA RTX technology to make environments and objects look incredibly realistic, which is great for tasks that need vision-based learning.
* '''Scalability''': Can simulate multiple robots and environments at the same time, thanks to GPU acceleration, making it handle complex simulations efficiently.
* '''Interoperability''': Works with machine learning frameworks like TensorFlow and PyTorch and supports ROS, so you can easily move from simulation to real-world deployment.
* '''Customizable Environments''': Lets you create and customize simulation environments, including importing 3D models and designing different terrains.
* '''Real-Time Feedback''': Provides real-time monitoring and analytics, giving you insights on how tasks are performing and resource usage.
==== Isaac Sim Applications ====
* '''Robotics Research''': Used in academia and industry to develop and test new algorithms for robot perception, control, and planning.
* '''Autonomous Navigation''': Helps simulate and test navigation algorithms for mobile robots and drones, improving path planning and obstacle avoidance.
* '''Manipulation Tasks''': Supports developing robotic skills like object grasping and assembly tasks, making robots more dexterous and precise.
* '''Industrial Automation''': Helps companies design and validate automation solutions for manufacturing and logistics, boosting efficiency and cutting down on downtime.
* '''Education and Training''': A great educational tool that offers hands-on experience in robotics and AI without the risks and costs of physical experiments.
=== Isaac Sim Integration with Isaac Gym ===
Isaac Sim works alongside Isaac Gym, NVIDIA’s tool for large-scale training with reinforcement learning. While Isaac Sim focuses on detailed simulations, Isaac Gym is great for efficient training. Together, they offer a comprehensive solution for developing and improving robotics applications.
===[https://github.com/haosulab/ManiSkill ManiSkill]===
===[[VSim]]===
=Training frameworks=
Popular training frameworks are listed here with example applications.
===[https://github.com/NVIDIA-Omniverse/IsaacGymEnvs Isaac Gym]===
Isaac Gym is NVIDIA's robotics simulation tool, part of the Isaac SDK. It leverages GPU acceleration to enable the simulation of thousands of robot bodies simultaneously, making it highly efficient for training machine learning models in robotics. It's designed to streamline robotics applications, focusing on reinforcement learning in a virtual environment.
===[https://gymnasium.farama.org/ Gymnasium]===
Gymnasium is an open-source toolkit for developing and comparing reinforcement learning algorithms. Originally developed by OpenAI as "Gym," it provides a standardized set of environments (like Atari games, robotic simulations, etc.) to test and benchmark AI algorithms. It's widely used in the AI research community to foster innovation and replication in RL studies.
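For orientation, a minimal Gymnasium interaction loop looks like the sketch below; it uses the built-in <code>CartPole-v1</code> environment purely as a stand-in for a robot task and a random policy as a placeholder.
<syntaxhighlight lang="python">
import gymnasium as gym

# Any registered environment id works here; CartPole is used only as a stand-in.
env = gym.make("CartPole-v1")
obs, info = env.reset(seed=0)

for _ in range(1000):
    action = env.action_space.sample()           # random policy as a placeholder
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:                  # episode ended; start a new one
        obs, info = env.reset()

env.close()
</syntaxhighlight>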
===[[Applications]]===
Over the last decade, several advances have been made in learning locomotion and manipulation skills in simulation. See the linked page for a non-comprehensive list.
== Training methods ==
===[[Imitation learning]]===
Imitation Learning is a technique where models learn to perform tasks by mimicking expert behaviors. This approach is often used when defining explicit reward functions is challenging. It accelerates learning by using pre-collected datasets of expert demonstrations, reducing the need for trial-and-error in initial learning phases.
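A common entry point to imitation learning is behavior cloning, i.e. supervised regression from observations to expert actions. The sketch below fits a linear policy on synthetic demonstration data with NumPy; a real pipeline would swap in a neural network and a recorded demonstration dataset.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "expert" demonstrations: observations and the actions an expert took.
obs = rng.normal(size=(1000, 8))                 # 1000 demos, 8-D observations
true_weights = rng.normal(size=(8, 2))           # hidden expert mapping (2-D actions)
actions = obs @ true_weights + 0.01 * rng.normal(size=(1000, 2))

# Behavior cloning with a linear policy: least-squares fit of W so obs @ W approximates actions.
W, *_ = np.linalg.lstsq(obs, actions, rcond=None)

def policy(observation):
    """Cloned policy: predict the expert's action for a new observation."""
    return observation @ W

print(policy(rng.normal(size=8)))
</syntaxhighlight>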
===[[Reinforcement Learning]]===
Reinforcement Learning involves agents learning to make decisions by interacting with an environment to maximize cumulative rewards. It's foundational in fields where sequential decision-making is crucial, like gaming, autonomous vehicles, and robotics. RL uses methods like Q-learning and policy gradient to iteratively improve agent performance based on feedback from the environment.
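To make the Q-learning idea concrete, the sketch below applies the tabular update rule to a toy chain environment; the environment, hyperparameters, and episode counts are illustrative placeholders and far simpler than anything needed for humanoid control.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.95, 0.1   # learning rate, discount, exploration

def toy_env_step(state, action):
    """Toy dynamics: action 1 moves right, action 0 moves left; reward at the last state."""
    next_state = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward

for episode in range(500):
    state = 0
    for _ in range(20):
        # Epsilon-greedy action selection.
        action = rng.integers(n_actions) if rng.random() < epsilon else int(np.argmax(Q[state]))
        next_state, reward = toy_env_step(state, action)
        # Q-learning update: move Q(s, a) toward the bootstrapped target.
        Q[state, action] += alpha * (reward + gamma * np.max(Q[next_state]) - Q[state, action])
        state = next_state

print(Q)
</syntaxhighlight>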
[[Category: Software]]
6116a27ba4c142dcbac4fe72994cab9c50891ea8
Mirasol3B
0
246
1059
2024-05-16T20:15:29Z
Ben
2
Created page with "Mirasol3B is an autoregressive multimodal model for time-aligned video and audio. {{infobox paper | name = Mirasol3B | full_name = Mirasol3B: A Multimodal Autoregressive Mode..."
wikitext
text/x-wiki
Mirasol3B is an autoregressive multimodal model for time-aligned video and audio.
{{infobox paper
| name = Mirasol3B
| full_name = Mirasol3B: A Multimodal Autoregressive Model for Time-Aligned and Contextual Modalities
| arxiv_link = https://arxiv.org/abs/2311.05698
| project_page = https://research.google/blog/scaling-multimodal-understanding-to-long-videos/
| twitter_link = https://twitter.com/GoogleAI/status/1724553024088191211
| date = February 2024
| authors = AJ Piergiovanni, Isaac Noble, Dahun Kim, Michael Ryoo, Victor Gomes, Anelia Angelova
}}
[[Category: Papers]]
062a79b195d8f90b4bb441a3b59b76c2fe35b2bb
Main Page
0
1
1060
1045
2024-05-16T20:17:02Z
Ben
2
/* Getting Started */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
| Textbook for graduate-level courses in robotics
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Courses]]
| Category for pages about useful courses related to robotics
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Firmware]]
| Category for pages relating to firmware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Papers]]
| Category for humanoid robotics papers
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Honda Robotics]]
| [[ASIMO]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[SoftBank Robotics]]
| [[Pepper]], [[NAO]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]], [[G1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[Xpeng]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]], [[Robonaut2]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[Proxy]]
|
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[SUPCON]]
| [[Navigator α]]
|-
| [[LASER Robotics]]
| [[HECTOR V2]]
|-
| [[UC Berkeley]]
| [[Berkeley Blue]]
|-
| [[NEURA Robotics]]
| [[4NE-1]]
|-
| [[University of Tehran]]
| [[Surena IV]]
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== List of Actuators ===
{| class="wikitable"
|-
! Actuator
! Notes
|-
| [[OBot]]
| Open-source actuator
|-
| [[SPIN Servo]]
| Open-source actuator
|-
| [[VESCular6]]
| A project based on [[VESC]]
|-
| [[ODrive]]
| A precision motor controller
|-
| [[Solo Motor Controller]]
| A motor controller alternative to the [[ODrive]].
|-
| [[J60]]
| Actuators built for the [[DEEP Robotics]] quadrupeds.
|-
| [[K-Scale Motor Controller]]
| An open-source motor controller
|}
9f9671e2fab483fe719415d49f496f9b20bc2dce
1078
1060
2024-05-19T16:55:57Z
46.193.2.72
0
/* Getting Started */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [[Advanced Robot Dynamics]]
| High-quality open-source course from CMU
|-
| [[Optimal Control]]
| High-quality open-source course from CMU
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
| Textbook for graduate-level courses in robotics
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Courses]]
| Category for pages about useful courses related to robotics
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Firmware]]
| Category for pages relating to firmware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Papers]]
| Category for humanoid robotics papers
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Honda Robotics]]
| [[ASIMO]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[SoftBank Robotics]]
| [[Pepper]], [[NAO]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]], [[G1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[Xpeng]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]], [[Robonaut2]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[Proxy]]
|
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[SUPCON]]
| [[Navigator α]]
|-
| [[LASER Robotics]]
| [[HECTOR V2]]
|-
| [[UC Berkeley]]
| [[Berkeley Blue]]
|-
| [[NEURA Robotics]]
| [[4NE-1]]
|-
| [[University of Tehran]]
| [[Surena IV]]
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== List of Actuators ===
{| class="wikitable"
|-
! Actuator
! Notes
|-
| [[OBot]]
| Open-source actuator
|-
| [[SPIN Servo]]
| Open-source actuator
|-
| [[VESCular6]]
| A project based on [[VESC]]
|-
| [[ODrive]]
| A precision motor controller
|-
| [[Solo Motor Controller]]
| A motor controller alternative to the [[ODrive]].
|-
| [[J60]]
| Actuators built for the [[DEEP Robotics]] quadrupeds.
|-
| [[K-Scale Motor Controller]]
| An open-source motor controller
|}
adfab4feab57685f1852ec89ed6a5bdef886f95e
1079
1078
2024-05-19T16:57:55Z
46.193.2.72
0
/* Getting Started */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.youtube.com/watch?v=LiNgr1tz49I&list=PLZnJoM76RM6ItAfZIxJYNKdaR_BobleLY Advanced Robot Dynamics]
| High-quality open-source course from CMU
|-
| [https://www.youtube.com/watch?v=6rUdAOCNXAU&list=PLZnJoM76RM6KugDT9sw5zhAmqKnGeoLRa Optimal Control]
| High-quality open-source course from CMU
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
| Textbook for graduate-level courses in robotics
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Courses]]
| Category for pages about useful courses related to robotics
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Firmware]]
| Category for pages relating to firmware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Papers]]
| Category for humanoid robotics papers
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Honda Robotics]]
| [[ASIMO]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[SoftBank Robotics]]
| [[Pepper]], [[NAO]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]], [[G1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[Xpeng]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]], [[Robonaut2]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[Proxy]]
|
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[SUPCON]]
| [[Navigator α]]
|-
| [[LASER Robotics]]
| [[HECTOR V2]]
|-
| [[UC Berkeley]]
| [[Berkeley Blue]]
|-
| [[NEURA Robotics]]
| [[4NE-1]]
|-
| [[University of Tehran]]
| [[Surena IV]]
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== List of Actuators ===
{| class="wikitable"
|-
! Actuator
! Notes
|-
| [[OBot]]
| Open-source actuator
|-
| [[SPIN Servo]]
| Open-source actuator
|-
| [[VESCular6]]
| A project based on [[VESC]]
|-
| [[ODrive]]
| A precision motor controller
|-
| [[Solo Motor Controller]]
| A motor controller alternative to the [[ODrive]].
|-
| [[J60]]
| Actuators built for the [[DEEP Robotics]] quadrupeds.
|-
| [[K-Scale Motor Controller]]
| An open-source motor controller
|}
07826297e96411b49b2071e9b33f9650fea8b810
MuJoCo
0
247
1061
2024-05-16T20:20:15Z
Vrtnis
21
Created page with "== MuJoCo == MuJoCo, short for Multi-Joint dynamics with Contact, is a physics engine designed for research and development in robotics, machine learning, and biomechanics. T..."
wikitext
text/x-wiki
== MuJoCo ==
MuJoCo, short for Multi-Joint dynamics with Contact, is a physics engine designed for research and development in robotics, machine learning, and biomechanics. This open-source software provides accurate and efficient simulation of complex physical systems, making it highly regarded among researchers and engineers.
=== History ===
MuJoCo was developed at the University of Washington and released in 2012. Initially created to aid in the study of motor control in humans and robots, it quickly gained popularity for its ability to simulate complex physical interactions with high precision and efficiency. In 2021, DeepMind, a subsidiary of Alphabet Inc., acquired MuJoCo and made it freely available to the public.
2424281c488950aad97c269811beb8f5e9f45fc9
1063
1061
2024-05-16T20:28:12Z
Vrtnis
21
/* Add suggestions and tips */
wikitext
text/x-wiki
== MuJoCo ==
MuJoCo, short for Multi-Joint dynamics with Contact, is a physics engine designed for research and development in robotics, machine learning, and biomechanics. This open-source software provides accurate and efficient simulation of complex physical systems, making it highly regarded among researchers and engineers.
=== History ===
MuJoCo was developed at the University of Washington and released in 2012. Initially created to aid in the study of motor control in humans and robots, it quickly gained popularity for its ability to simulate complex physical interactions with high precision and efficiency. In 2021, DeepMind, a subsidiary of Alphabet Inc., acquired MuJoCo and made it freely available to the public.
=== Tips and Suggestions ===
When creating MuJoCo XML files, it can be helpful to start with a full robot model and then manually copy and paste sections to create single or dual-arm setups.
3a8627eaea22adf82d3545041ef51b40f3e28175
1066
1063
2024-05-17T04:36:02Z
Vrtnis
21
/* Added Tips and Suggestions */
wikitext
text/x-wiki
== MuJoCo ==
MuJoCo, short for Multi-Joint dynamics with Contact, is a physics engine designed for research and development in robotics, machine learning, and biomechanics. This open-source software provides accurate and efficient simulation of complex physical systems, making it highly regarded among researchers and engineers.
=== History ===
MuJoCo was developed at the University of Washington and released in 2012. Initially created to aid in the study of motor control in humans and robots, it quickly gained popularity for its ability to simulate complex physical interactions with high precision and efficiency. In 2021, DeepMind, a subsidiary of Alphabet Inc., acquired MuJoCo and made it freely available to the public.
=== Tips and Suggestions ===
When creating MuJoCo XML files, it can be helpful to start with a full robot model and then manually copy and paste sections to create single or dual-arm setups. This approach saves time and ensures consistency across different models.
50b83646aa82e6a991c697d695f0a3409283ba2f
1067
1066
2024-05-17T04:36:39Z
Vrtnis
21
/*Added Tips and Suggestions */
wikitext
text/x-wiki
== MuJoCo ==
MuJoCo, short for Multi-Joint dynamics with Contact, is a physics engine designed for research and development in robotics, machine learning, and biomechanics. This open-source software provides accurate and efficient simulation of complex physical systems, making it highly regarded among researchers and engineers.
=== History ===
MuJoCo was developed at the University of Washington and released in 2012. Initially created to aid in the study of motor control in humans and robots, it quickly gained popularity for its ability to simulate complex physical interactions with high precision and efficiency. In 2021, DeepMind, a subsidiary of Alphabet Inc., acquired MuJoCo and made it freely available to the public.
=== Tips and Suggestions ===
When creating MuJoCo XML files, it can be helpful to start with a full robot model and then manually copy and paste sections to create single or dual-arm setups. This approach saves time and ensures consistency across different models.
Additionally, MuJoCo supports defining keyframes for specific poses directly in the XML, which can streamline the setup process for various robot configurations.
90fcf5ead620a4c480c2ddd481ff58ef17804251
1068
1067
2024-05-17T04:37:59Z
Vrtnis
21
/* Added tips and Suggestions */
wikitext
text/x-wiki
== MuJoCo ==
MuJoCo, short for Multi-Joint dynamics with Contact, is a physics engine designed for research and development in robotics, machine learning, and biomechanics. This open-source software provides accurate and efficient simulation of complex physical systems, making it highly regarded among researchers and engineers.
=== History ===
MuJoCo was developed at the University of Washington and released in 2012. Initially created to aid in the study of motor control in humans and robots, it quickly gained popularity for its ability to simulate complex physical interactions with high precision and efficiency. In 2021, DeepMind, a subsidiary of Alphabet Inc., acquired MuJoCo and made it freely available to the public.
=== Tips and Suggestions ===
When creating MuJoCo XML files, it can be helpful to start with a full robot model and then manually copy and paste sections to create single or dual-arm setups. This approach saves time and ensures consistency across different models.
Additionally, MuJoCo supports defining keyframes for specific poses directly in the XML, which can streamline the setup process for various robot configurations.
If you encounter issues with simulation stability, e.g. warnings about NaN values, consider fine-tuning the hyperparameters, including control gains and regularization terms, to achieve more stable and reliable simulations.
d3ceb006286874080787404c1d363cc0fe71b10c
1072
1068
2024-05-19T15:52:30Z
Vrtnis
21
/* Add MuJoCo Menagerie*/
wikitext
text/x-wiki
== MuJoCo ==
MuJoCo, short for Multi-Joint dynamics with Contact, is a physics engine designed for research and development in robotics, machine learning, and biomechanics. This open-source software provides accurate and efficient simulation of complex physical systems, making it highly regarded among researchers and engineers.
=== History ===
MuJoCo was developed at the University of Washington and released in 2012. Initially created to aid in the study of motor control in humans and robots, it quickly gained popularity for its ability to simulate complex physical interactions with high precision and efficiency. In 2021, DeepMind, a subsidiary of Alphabet Inc., acquired MuJoCo and made it freely available to the public.
=== Tips and Suggestions ===
When creating MuJoCo XML files, it can be helpful to start with a full robot model and then manually copy and paste sections to create single or dual-arm setups. This approach saves time and ensures consistency across different models.
Additionally, MuJoCo supports defining keyframes for specific poses directly in the XML, which can streamline the setup process for various robot configurations.
If you encounter issues with simulation stability, e.g. warnings about NaN values, consider fine-tuning the hyperparameters, including control gains and regularization terms, to achieve more stable and reliable simulations.
== MuJoCo Menagerie ==
The MuJoCo Menagerie is a collection of pre-built models and simulation environments showcasing the capabilities of the MuJoCo physics engine. This resource is valuable for researchers, engineers, and enthusiasts exploring robotics, biomechanics, and machine learning applications.
50204989921edabd6c890401f3ce621e5b32f52c
1073
1072
2024-05-19T16:11:00Z
Vrtnis
21
/* Add MuJoCo Menagerie Getting Started*/
wikitext
text/x-wiki
== MuJoCo ==
MuJoCo, short for Multi-Joint dynamics with Contact, is a physics engine designed for research and development in robotics, machine learning, and biomechanics. This open-source software provides accurate and efficient simulation of complex physical systems, making it highly regarded among researchers and engineers.
=== History ===
MuJoCo was developed at the University of Washington and released in 2012. Initially created to aid in the study of motor control in humans and robots, it quickly gained popularity for its ability to simulate complex physical interactions with high precision and efficiency. In 2021, DeepMind, a subsidiary of Alphabet Inc., acquired MuJoCo and made it freely available to the public.
=== Tips and Suggestions ===
When creating MuJoCo XML files, it can be helpful to start with a full robot model and then manually copy and paste sections to create single or dual-arm setups. This approach saves time and ensures consistency across different models.
Additionally, MuJoCo supports defining keyframes for specific poses directly in the XML, which can streamline the setup process for various robot configurations.
If you encounter issues with simulation stability, e.g. warnings about NaN values, consider fine-tuning the hyperparameters, including control gains and regularization terms, to achieve more stable and reliable simulations.
== MuJoCo Menagerie ==
The MuJoCo Menagerie is a collection of pre-built models and simulation environments showcasing the capabilities of the MuJoCo physics engine. This resource is valuable for researchers, engineers, and enthusiasts exploring robotics, biomechanics, and machine learning applications.
=== Getting Started ===
Download models and example files from the official MuJoCo website or repository. Detailed documentation and setup instructions accompany each model. Users are encouraged to experiment, modify XML files, and share their findings with the community.
By leveraging the MuJoCo Menagerie, researchers and engineers can accelerate their work, explore new possibilities, and contribute to the MuJoCo ecosystem.
1f3e9cda63f1f89bc4c7818d37167c45a87a1d56
1074
1073
2024-05-19T16:25:21Z
Vrtnis
21
/* Add code samples for Menagerie*/
wikitext
text/x-wiki
== MuJoCo ==
MuJoCo, short for Multi-Joint dynamics with Contact, is a physics engine designed for research and development in robotics, machine learning, and biomechanics. This open-source software provides accurate and efficient simulation of complex physical systems, making it highly regarded among researchers and engineers.
=== History ===
MuJoCo was developed at the University of Washington and released in 2012. Initially created to aid in the study of motor control in humans and robots, it quickly gained popularity for its ability to simulate complex physical interactions with high precision and efficiency. In 2021, DeepMind, a subsidiary of Alphabet Inc., acquired MuJoCo and made it freely available to the public.
=== Tips and Suggestions ===
When creating MuJoCo XML files, it can be helpful to start with a full robot model and then manually copy and paste sections to create single or dual-arm setups. This approach saves time and ensures consistency across different models.
Additionally, MuJoCo supports defining keyframes for specific poses directly in the XML, which can streamline the setup process for various robot configurations.
If you encounter issues with simulation stability, e.g. warnings about NaN values, consider fine-tuning the hyperparameters, including control gains and regularization terms, to achieve more stable and reliable simulations.
== MuJoCo Menagerie ==
The MuJoCo Menagerie is a collection of pre-built models and simulation environments showcasing the capabilities of the MuJoCo physics engine. This resource is valuable for researchers, engineers, and enthusiasts exploring robotics, biomechanics, and machine learning applications.
=== Getting Started ===
Download models and example files from the official MuJoCo website or repository. Detailed documentation and setup instructions accompany each model. Users are encouraged to experiment, modify XML files, and share their findings with the community.
By leveraging the MuJoCo Menagerie, researchers and engineers can accelerate their work, explore new possibilities, and contribute to the MuJoCo ecosystem.
=== Getting Started ===
To install the Menagerie, simply clone the repository in the directory of your choice:
<syntaxhighlight lang="bash">
git clone https://github.com/google-deepmind/mujoco_menagerie.git
</syntaxhighlight>
The easiest way to interactively explore a model is to load it in the simulate binary which ships with every MuJoCo distribution. Just drag and drop the <code>scene.xml</code> file into the simulate window. Alternatively, you can use the command line to launch simulate and directly pass in the path to the XML.
Outside of interactive simulation, you can load a model exactly as you would with any other XML file in MuJoCo. Here’s how you do it with the C/C++ API:
<syntaxhighlight lang="c">
#include <mujoco.h>
mjModel* model = mj_loadXML("unitree_a1/a1.xml", nullptr, nullptr, 0);
mjData* data = mj_makeData(model);
mj_step(model, data);
</syntaxhighlight>
And here’s how you do it with Python:
<syntaxhighlight lang="python">
import mujoco
model = mujoco.MjModel.from_xml_path("unitree_a1/a1.xml")
data = mujoco.MjData(model)
mujoco.mj_step(model, data)
</syntaxhighlight>
By leveraging the MuJoCo Menagerie, researchers and engineers can accelerate their work, explore new possibilities, and contribute to the MuJoCo ecosystem.
d6366316d0a4387729bb01946a9e9e9f1e506daf
1075
1074
2024-05-19T16:25:43Z
Vrtnis
21
wikitext
text/x-wiki
== MuJoCo ==
MuJoCo, short for Multi-Joint dynamics with Contact, is a physics engine designed for research and development in robotics, machine learning, and biomechanics. This open-source software provides accurate and efficient simulation of complex physical systems, making it highly regarded among researchers and engineers.
=== History ===
MuJoCo was developed at the University of Washington and released in 2012. Initially created to aid in the study of motor control in humans and robots, it quickly gained popularity for its ability to simulate complex physical interactions with high precision and efficiency. In 2021, DeepMind, a subsidiary of Alphabet Inc., acquired MuJoCo and made it freely available to the public.
=== Tips and Suggestions ===
When creating MuJoCo XML files, it can be helpful to start with a full robot model and then manually copy and paste sections to create single or dual-arm setups. This approach saves time and ensures consistency across different models.
Additionally, MuJoCo supports defining keyframes for specific poses directly in the XML, which can streamline the setup process for various robot configurations.
If you encounter issues with simulation stability, e.g. warnings about NaN values, consider fine-tuning the hyperparameters, including control gains and regularization terms, to achieve more stable and reliable simulations.
== MuJoCo Menagerie ==
The MuJoCo Menagerie is a collection of pre-built models and simulation environments showcasing the capabilities of the MuJoCo physics engine. This resource is valuable for researchers, engineers, and enthusiasts exploring robotics, biomechanics, and machine learning applications.
=== Getting Started ===
Download models and example files from the official MuJoCo website or repository. Detailed documentation and setup instructions accompany each model. Users are encouraged to experiment, modify XML files, and share their findings with the community.
By leveraging the MuJoCo Menagerie, researchers and engineers can accelerate their work, explore new possibilities, and contribute to the MuJoCo ecosystem.
=== Getting Started ===
To install the Menagerie, simply clone the repository in the directory of your choice:
<syntaxhighlight lang="bash">
git clone https://github.com/google-deepmind/mujoco_menagerie.git
</syntaxhighlight>
The easiest way to interactively explore a model is to load it in the simulate binary which ships with every MuJoCo distribution. Just drag and drop the <code>scene.xml</code> file into the simulate window. Alternatively, you can use the command line to launch simulate and directly pass in the path to the XML.
Outside of interactive simulation, you can load a model exactly as you would with any other XML file in MuJoCo. Here’s how you do it with the C/C++ API:
<syntaxhighlight lang="c">
#include <mujoco.h>
mjModel* model = mj_loadXML("unitree_a1/a1.xml", nullptr, nullptr, 0);
mjData* data = mj_makeData(model);
mj_step(model, data);
</syntaxhighlight>
And here’s how you do it with Python:
<syntaxhighlight lang="python">
import mujoco
model = mujoco.MjModel.from_xml_path("unitree_a1/a1.xml")
data = mujoco.MjData(model)
mujoco.mj_step(model, data)
</syntaxhighlight>
8138bba2632d5fd6de12b7825497066c15142fba
1077
1075
2024-05-19T16:55:42Z
Vrtnis
21
/* MuJoCo Menagerie */
wikitext
text/x-wiki
== MuJoCo ==
MuJoCo, short for Multi-Joint dynamics with Contact, is a physics engine designed for research and development in robotics, machine learning, and biomechanics. This open-source software provides accurate and efficient simulation of complex physical systems, making it highly regarded among researchers and engineers.
=== History ===
MuJoCo was developed at the University of Washington and released in 2012. Initially created to aid in the study of motor control in humans and robots, it quickly gained popularity for its ability to simulate complex physical interactions with high precision and efficiency. In 2021, DeepMind, a subsidiary of Alphabet Inc., acquired MuJoCo and made it freely available to the public.
=== Tips and Suggestions ===
When creating MuJoCo XML files, it can be helpful to start with a full robot model and then manually copy and paste sections to create single or dual-arm setups. This approach saves time and ensures consistency across different models.
Additionally, MuJoCo supports defining keyframes for specific poses directly in the XML, which can streamline the setup process for various robot configurations.
If you encounter issues with simulation stability, e.g. warnings about NaN values, consider fine-tuning the hyperparameters, including control gains and regularization terms, to achieve more stable and reliable simulations.
== MuJoCo Menagerie ==
The MuJoCo Menagerie is a collection of pre-built models and simulation environments showcasing the capabilities of the MuJoCo physics engine. This resource is valuable for researchers, engineers, and enthusiasts exploring robotics, biomechanics, and machine learning applications.
[[File:Mujoco_menagerie_github.png|alt=Source: MuJoCo Menagerie GitHub|none|300px|Source: MuJoCo Menagerie GitHub]]
=== Getting Started ===
Download models and example files from the official MuJoCo website or repository. Detailed documentation and setup instructions accompany each model. Users are encouraged to experiment, modify XML files, and share their findings with the community.
By leveraging the MuJoCo Menagerie, researchers and engineers can accelerate their work, explore new possibilities, and contribute to the MuJoCo ecosystem.
=== Getting Started ===
To install the Menagerie, simply clone the repository in the directory of your choice:
<syntaxhighlight lang="bash">
git clone https://github.com/google-deepmind/mujoco_menagerie.git
</syntaxhighlight>
The easiest way to interactively explore a model is to load it in the simulate binary which ships with every MuJoCo distribution. Just drag and drop the <code>scene.xml</code> file into the simulate window. Alternatively, you can use the command line to launch simulate and directly pass in the path to the XML.
Outside of interactive simulation, you can load a model exactly as you would with any other XML file in MuJoCo. Here’s how you do it with the C/C++ API:
<syntaxhighlight lang="c">
#include <mujoco.h>
mjModel* model = mj_loadXML("unitree_a1/a1.xml", nullptr, nullptr, 0);
mjData* data = mj_makeData(model);
mj_step(model, data);
</syntaxhighlight>
And here’s how you do it with Python:
<syntaxhighlight lang="python">
import mujoco
model = mujoco.MjModel.from_xml_path("unitree_a1/a1.xml")
data = mujoco.MjData(model)
mujoco.mj_step(model, data)
</syntaxhighlight>
10abf6c10ee23f8a35b3c80f8ee36bc592662547
1080
1077
2024-05-19T16:58:17Z
Vrtnis
21
/* Add repo link */
wikitext
text/x-wiki
== MuJoCo ==
MuJoCo, short for Multi-Joint dynamics with Contact, is a physics engine designed for research and development in robotics, machine learning, and biomechanics. This open-source software provides accurate and efficient simulation of complex physical systems, making it highly regarded among researchers and engineers.
=== History ===
MuJoCo was developed at the University of Washington and released in 2012. Initially created to aid in the study of motor control in humans and robots, it quickly gained popularity for its ability to simulate complex physical interactions with high precision and efficiency. In 2021, DeepMind, a subsidiary of Alphabet Inc., acquired MuJoCo and made it freely available to the public.
=== Tips and Suggestions ===
When creating MuJoCo XML files, it can be helpful to start with a full robot model and then manually copy and paste sections to create single or dual-arm setups. This approach saves time and ensures consistency across different models.
Additionally, MuJoCo supports defining keyframes for specific poses directly in the XML, which can streamline the setup process for various robot configurations.
If you encounter issues with simulation stability, e.g. warnings about NaN values, consider fine-tuning the hyperparameters, including control gains and regularization terms, to achieve more stable and reliable simulations.
== MuJoCo Menagerie ==
The MuJoCo Menagerie is a collection of pre-built models and simulation environments showcasing the capabilities of the MuJoCo physics engine. This resource is valuable for researchers, engineers, and enthusiasts exploring robotics, biomechanics, and machine learning applications.
[[File:Mujoco_menagerie_github.png|alt=Source: MuJoCo Menagerie GitHub|none|300px|Source: MuJoCo Menagerie GitHub]]
=== Getting Started ===
Download models and example files from the official MuJoCo website or [https://github.com/google-deepmind/mujoco_menagerie repository]. Detailed documentation and setup instructions accompany each model. Users are encouraged to experiment, modify XML files, and share their findings with the community.
By leveraging the MuJoCo Menagerie, researchers and engineers can accelerate their work, explore new possibilities, and contribute to the MuJoCo ecosystem.
=== Getting Started ===
To install the Menagerie, simply clone the repository in the directory of your choice:
<syntaxhighlight lang="bash">
git clone https://github.com/google-deepmind/mujoco_menagerie.git
</syntaxhighlight>
The easiest way to interactively explore a model is to load it in the simulate binary which ships with every MuJoCo distribution. Just drag and drop the <code>scene.xml</code> file into the simulate window. Alternatively, you can use the command line to launch simulate and directly pass in the path to the XML.
Outside of interactive simulation, you can load a model exactly as you would with any other XML file in MuJoCo. Here’s how you do it with the C/C++ API:
<syntaxhighlight lang="c">
#include <mujoco.h>
mjModel* model = mj_loadXML("unitree_a1/a1.xml", nullptr, nullptr, 0);
mjData* data = mj_makeData(model);
mj_step(model, data);
</syntaxhighlight>
And here’s how you do it with Python:
<syntaxhighlight lang="python">
import mujoco
model = mujoco.MjModel.from_xml_path("unitree_a1/a1.xml")
data = mujoco.MjData(model)
mujoco.mj_step(model, data)
</syntaxhighlight>
57bf886d7b8220229c1632c921786f5e8516f1fe
1081
1080
2024-05-19T16:58:40Z
Vrtnis
21
/* Getting Started */
wikitext
text/x-wiki
== MuJoCo ==
MuJoCo, short for Multi-Joint dynamics with Contact, is a physics engine designed for research and development in robotics, machine learning, and biomechanics. This open-source software provides accurate and efficient simulation of complex physical systems, making it highly regarded among researchers and engineers.
=== History ===
MuJoCo was developed at the University of Washington and released in 2012. Initially created to aid in the study of motor control in humans and robots, it quickly gained popularity for its ability to simulate complex physical interactions with high precision and efficiency. In 2021, DeepMind, a subsidiary of Alphabet Inc., acquired MuJoCo and made it freely available to the public.
=== Tips and Suggestions ===
When creating MuJoCo XML files, it can be helpful to start with a full robot model and then manually copy and paste sections to create single or dual-arm setups. This approach saves time and ensures consistency across different models.
Additionally, MuJoCo supports defining keyframes for specific poses directly in the XML, which can streamline the setup process for various robot configurations.
If you encounter issues with simulation stability, e.g. warnings about NaN values, consider fine-tuning the hyperparameters, including control gains and regularization terms, to achieve more stable and reliable simulations.
== MuJoCo Menagerie ==
The MuJoCo Menagerie is a collection of pre-built models and simulation environments showcasing the capabilities of the MuJoCo physics engine. This resource is valuable for researchers, engineers, and enthusiasts exploring robotics, biomechanics, and machine learning applications.
[[File:Mujoco_menagerie_github.png|alt=Source: MuJoCo Menagerie GitHub|none|300px|Source: MuJoCo Menagerie GitHub]]
=== Getting Started ===
Download models and example files from the official MuJoCo website or [https://github.com/google-deepmind/mujoco_menagerie repository]. Detailed documentation and setup instructions accompany each model. Users are encouraged to experiment, modify XML files, and share their findings with the community.
By leveraging the MuJoCo Menagerie, researchers and engineers can accelerate their work, explore new possibilities, and contribute to the MuJoCo ecosystem.
To install the Menagerie, simply clone the repository in the directory of your choice:
<syntaxhighlight lang="bash">
git clone https://github.com/google-deepmind/mujoco_menagerie.git
</syntaxhighlight>
The easiest way to interactively explore a model is to load it in the simulate binary which ships with every MuJoCo distribution. Just drag and drop the <code>scene.xml</code> file into the simulate window. Alternatively, you can use the command line to launch simulate and directly pass in the path to the XML.
Outside of interactive simulation, you can load a model exactly as you would with any other XML file in MuJoCo. Here’s how you do it with the C/C++ API:
<syntaxhighlight lang="c">
#include <mujoco.h>
mjModel* model = mj_loadXML("unitree_a1/a1.xml", nullptr, nullptr, 0);
mjData* data = mj_makeData(model);
mj_step(model, data);
</syntaxhighlight>
And here’s how you do it with Python:
<syntaxhighlight lang="python">
import mujoco
model = mujoco.MjModel.from_xml_path("unitree_a1/a1.xml")
data = mujoco.MjData(model)
mujoco.mj_step(model, data)
</syntaxhighlight>
d72e6c391b4620e4344bdc91690b9d403f687890
1082
1081
2024-05-19T16:59:27Z
Vrtnis
21
wikitext
text/x-wiki
This is incomplete and a work in progress; you can help by expanding it!
== MuJoCo ==
MuJoCo, short for Multi-Joint dynamics with Contact, is a physics engine designed for research and development in robotics, machine learning, and biomechanics. This open-source software provides accurate and efficient simulation of complex physical systems, making it highly regarded among researchers and engineers.
=== History ===
MuJoCo was developed at the University of Washington and released in 2012. Initially created to aid in the study of motor control in humans and robots, it quickly gained popularity for its ability to simulate complex physical interactions with high precision and efficiency. In 2021, DeepMind, a subsidiary of Alphabet Inc., acquired MuJoCo and made it freely available to the public.
=== Tips and Suggestions ===
When creating MuJoCo XML files, it can be helpful to start with a full robot model and then manually copy and paste sections to create single or dual-arm setups. This approach saves time and ensures consistency across different models.
Additionally, MuJoCo supports defining keyframes for specific poses directly in the XML, which can streamline the setup process for various robot configurations.
If you encounter issues with simulation stability, e.g. warnings about NaN values, consider fine-tuning the hyperparameters, including control gains and regularization terms, to achieve more stable and reliable simulations.
== MuJoCo Menagerie ==
The MuJoCo Menagerie is a collection of pre-built models and simulation environments showcasing the capabilities of the MuJoCo physics engine. This resource is valuable for researchers, engineers, and enthusiasts exploring robotics, biomechanics, and machine learning applications.
[[File:Mujoco_menagerie_github.png|alt=Source: MuJoCo Menagerie GitHub|none|300px|Source: MuJoCo Menagerie GitHub]]
=== Getting Started ===
Download models and example files from the official MuJoCo website or [https://github.com/google-deepmind/mujoco_menagerie repository]. Detailed documentation and setup instructions accompany each model. Users are encouraged to experiment, modify XML files, and share their findings with the community.
By leveraging the MuJoCo Menagerie, researchers and engineers can accelerate their work, explore new possibilities, and contribute to the MuJoCo ecosystem.
To install the Menagerie, simply clone the repository in the directory of your choice:
<syntaxhighlight lang="bash">
git clone https://github.com/google-deepmind/mujoco_menagerie.git
</syntaxhighlight>
The easiest way to interactively explore a model is to load it in the simulate binary which ships with every MuJoCo distribution. Just drag and drop the <code>scene.xml</code> file into the simulate window. Alternatively, you can use the command line to launch simulate and directly pass in the path to the XML.
Outside of interactive simulation, you can load a model exactly as you would with any other XML file in MuJoCo. Here’s how you do it with the C/C++ API:
<syntaxhighlight lang="c">
#include <mujoco.h>
mjModel* model = mj_loadXML("unitree_a1/a1.xml", nullptr, nullptr, 0);
mjData* data = mj_makeData(model);
mj_step(model, data);
</syntaxhighlight>
And here’s how you do it with Python:
<syntaxhighlight lang="python">
import mujoco
model = mujoco.MjModel.from_xml_path("unitree_a1/a1.xml")
data = mujoco.MjData(model)
mujoco.mj_step(model, data)
</syntaxhighlight>
f23193e77b902e550e1129ee80b67742e10d85a4
1083
1082
2024-05-19T17:01:22Z
Vrtnis
21
wikitext
text/x-wiki
This is incomplete and a work in progress; you can help by expanding it!
== MuJoCo ==
MuJoCo, short for Multi-Joint dynamics with Contact, is a physics engine designed for research and development in robotics, machine learning, and biomechanics. This [https://github.com/google-deepmind/mujoco open-source software] provides accurate and efficient simulation of complex physical systems, making it highly regarded among researchers and engineers.
=== History ===
MuJoCo was developed at the University of Washington and released in 2012. Initially created to aid in the study of motor control in humans and robots, it quickly gained popularity for its ability to simulate complex physical interactions with high precision and efficiency. In 2021, DeepMind, a subsidiary of Alphabet Inc., acquired MuJoCo and made it freely available to the public.
=== Tips and Suggestions ===
When creating MuJoCo XML files, it can be helpful to start with a full robot model and then manually copy and paste sections to create single or dual-arm setups. This approach saves time and ensures consistency across different models.
Additionally, MuJoCo supports defining keyframes for specific poses directly in the XML, which can streamline the setup process for various robot configurations.
If you encounter issues with simulation stability, e.g. warnings about NaN values, consider fine-tuning the hyperparameters, including control gains and regularization terms, to achieve more stable and reliable simulations.
== MuJoCo Menagerie ==
The MuJoCo Menagerie is a collection of pre-built models and simulation environments showcasing the capabilities of the MuJoCo physics engine. This resource is valuable for researchers, engineers, and enthusiasts exploring robotics, biomechanics, and machine learning applications.
[[File:Mujoco_menagerie_github.png|alt=Source: MuJoCo Menagerie GitHub|none|300px|Source: MuJoCo Menagerie GitHub]]
=== Getting Started ===
Download models and example files from the official MuJoCo website or [https://github.com/google-deepmind/mujoco_menagerie repository]. Detailed documentation and setup instructions accompany each model. Users are encouraged to experiment, modify XML files, and share their findings with the community.
By leveraging the MuJoCo Menagerie, researchers and engineers can accelerate their work, explore new possibilities, and contribute to the MuJoCo ecosystem.
To install the Menagerie, simply clone the repository in the directory of your choice:
<syntaxhighlight lang="bash">
git clone https://github.com/google-deepmind/mujoco_menagerie.git
</syntaxhighlight>
The easiest way to interactively explore a model is to load it in the simulate binary which ships with every MuJoCo distribution. Just drag and drop the <code>scene.xml</code> file into the simulate window. Alternatively, you can use the command line to launch simulate and directly pass in the path to the XML.
Outside of interactive simulation, you can load a model exactly as you would with any other XML file in MuJoCo. Here’s how you do it with the C/C++ API:
<syntaxhighlight lang="c">
#include <mujoco.h>
mjModel* model = mj_loadXML("unitree_a1/a1.xml", nullptr, nullptr, 0);
mjData* data = mj_makeData(model);
mj_step(model, data);
</syntaxhighlight>
And here’s how you do it with Python:
<syntaxhighlight lang="python">
import mujoco
model = mujoco.MjModel.from_xml_path("unitree_a1/a1.xml")
data = mujoco.MjData(model)
mujoco.mj_step(model, data)
</syntaxhighlight>
6312f98ee95aa96ebef1e6d67ebb5884e86b6bfa
1096
1083
2024-05-19T18:52:04Z
Vrtnis
21
/*Add WASM Ref*/
wikitext
text/x-wiki
This is incomplete and a work in progress; you can help by expanding it!
== MuJoCo ==
MuJoCo, short for Multi-Joint dynamics with Contact, is a physics engine designed for research and development in robotics, machine learning, and biomechanics. This [https://github.com/google-deepmind/mujoco open-source software] provides accurate and efficient simulation of complex physical systems, making it highly regarded among researchers and engineers.
=== History ===
MuJoCo was developed at the University of Washington and released in 2012. Initially created to aid in the study of motor control in humans and robots, it quickly gained popularity for its ability to simulate complex physical interactions with high precision and efficiency. In 2021, DeepMind, a subsidiary of Alphabet Inc., acquired MuJoCo and made it freely available to the public. There is also a community-built MuJoCo WASM port that loads and runs MuJoCo 2.3.1 models using JavaScript and WebAssembly<ref>https://github.com/zalo/mujoco_wasm</ref>.
=== Tips and Suggestions ===
When creating MuJoCo XML files, it can be helpful to start with a full robot model and then manually copy and paste sections to create single- or dual-arm setups. This approach saves time and ensures consistency across different models.
Additionally, MuJoCo supports defining keyframes for specific poses directly in the XML, which can streamline the setup process for various robot configurations.
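For example, a keyframe defined in the MJCF can be applied from Python with <code>mj_resetDataKeyframe</code>. The snippet below is a minimal sketch that uses a made-up single-hinge model purely for illustration; real robot files define their keyframes the same way.
<syntaxhighlight lang="python">
import mujoco

# Minimal, invented MJCF with one hinge joint and a named keyframe.
XML = """
<mujoco>
  <worldbody>
    <body name="arm">
      <joint name="shoulder" type="hinge" axis="0 1 0"/>
      <geom type="capsule" size="0.02" fromto="0 0 0 0 0 0.3"/>
    </body>
  </worldbody>
  <keyframe>
    <key name="raised" qpos="1.2"/>
  </keyframe>
</mujoco>
"""

model = mujoco.MjModel.from_xml_string(XML)
data = mujoco.MjData(model)

# Reset the simulation state to keyframe 0 ("raised") instead of the default pose.
mujoco.mj_resetDataKeyframe(model, data, 0)
print(data.qpos)  # -> [1.2]
</syntaxhighlight>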
If you encounter issues with simulation stability, e.g., warnings about NaN values, consider fine-tuning hyperparameters such as control gains and regularization terms to achieve more stable and reliable simulations.
== MuJoCo Menagerie ==
The MuJoCo Menagerie is a collection of pre-built models and simulation environments showcasing the capabilities of the MuJoCo physics engine. This resource is valuable for researchers, engineers, and enthusiasts exploring robotics, biomechanics, and machine learning applications.
[[File:Mujoco_menagerie_github.png|alt=Source: MuJoCo Menagerie GitHub|none|300px|Source: MuJoCo Menagerie GitHub]]
=== Getting Started ===
Download models and example files from the official MuJoCo website or [https://github.com/google-deepmind/mujoco_menagerie repository]. Detailed documentation and setup instructions accompany each model. Users are encouraged to experiment, modify XML files, and share their findings with the community.
By leveraging the MuJoCo Menagerie, researchers and engineers can accelerate their work, explore new possibilities, and contribute to the MuJoCo ecosystem.
To install the Menagerie, simply clone the repository in the directory of your choice:
<syntaxhighlight lang="bash">
git clone https://github.com/google-deepmind/mujoco_menagerie.git
</syntaxhighlight>
The easiest way to interactively explore a model is to load it in the simulate binary which ships with every MuJoCo distribution. Just drag and drop the <code>scene.xml</code> file into the simulate window. Alternatively, you can use the command line to launch simulate and directly pass in the path to the XML.
Outside of interactive simulation, you can load a model exactly as you would with any other XML file in MuJoCo. Here’s how you do it with the C/C++ API:
<syntaxhighlight lang="c">
#include <mujoco/mujoco.h>

// Load the MJCF model; `error` receives a message if parsing or compilation fails.
char error[1000] = "";
mjModel* model = mj_loadXML("unitree_a1/a1.xml", NULL, error, sizeof(error));
// Allocate the simulation state and advance one timestep.
mjData* data = mj_makeData(model);
mj_step(model, data);
</syntaxhighlight>
And here’s how you do it with Python:
<syntaxhighlight lang="python">
import mujoco
model = mujoco.MjModel.from_xml_path("unitree_a1/a1.xml")
data = mujoco.MjData(model)
mujoco.mj_step(model, data)
</syntaxhighlight>
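Once a model is loaded, a common next step is to advance the simulation for a fixed amount of simulated time and inspect the state. A minimal sketch, reusing the Menagerie model path from the example above (run from the root of the <code>mujoco_menagerie</code> checkout):
<syntaxhighlight lang="python">
import mujoco

model = mujoco.MjModel.from_xml_path("unitree_a1/a1.xml")
data = mujoco.MjData(model)

# Advance physics until one second of simulated time has elapsed.
while data.time < 1.0:
    mujoco.mj_step(model, data)

# qpos holds the generalized positions, qvel the generalized velocities.
print(f"t = {data.time:.3f} s, nq = {model.nq}, first qpos entries: {data.qpos[:3]}")
</syntaxhighlight>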
3b948aa1215c69ee75bc806b08889cb964d5e56d
Getting Started with Humanoid Robots
0
193
1062
1057
2024-05-16T20:23:55Z
Vrtnis
21
/* Add wikilink to MuJoCo (MJX) */
wikitext
text/x-wiki
This is a build guide for getting started experimenting with your own humanoid robot.
This is incomplete and a work in progress; you can help by expanding it!
== Building Your Humanoid Robot ==
In humanoid robotics, choosing the right components, such as actuators and gearboxes, is crucial. Builders can use planetary and cycloidal gear actuators for their precision and strength, along with Series Elastic and Quasi-Direct Drive actuators for smoother, more natural movements. Advanced designs like the [https://humanoids.wiki/w/MIT_Cheetah MIT Cheetah] actuator push the boundaries with fast, agile movements. Projects like the SPIN initiative are also key, as they make high-quality actuator technology more accessible, helping the field evolve and improve.
== Actuators and Gearboxes ==
=== Actuator Types and Design Inspirations ===
==== Planetary and Cycloidal Gear Actuators ====
These actuators remain popular in the robotics community due to their high torque output and compact form factors. Planetary gears are favored for their efficiency and ability to handle high power densities, crucial for humanoid robotics. Cycloidal gears offer superior load-bearing capabilities and minimal backlash, ideal for precise motion control.
MyActuator (just one option) offers a variety of planetary actuators. These actuators, while still relatively pricey, offer robust performance and are integral to efficient builds. Some models are:
* RMD X4: A lightweight and compact actuator that provides precise control and high efficiency.
* RMD X6: Offers a good balance of torque and speed, suitable for medium-sized applications.
* RMD X8: Features a more powerful motor and higher torque capacity, making it ideal for more demanding tasks.
* RMD X10: The most powerful of these, designed for high-torque applications with excellent control features.
==== Series Elastic and Quasi-Direct Drive Actuators ====
Series Elastic Actuators (SEAs) are used in applications requiring safe and compliant human-robot interaction. They incorporate elastic elements, allowing for energy absorption and safer interactions. Quasi-Direct Drive Actuators provide a balance between the control fidelity of direct drives and the mechanical simplicity of geared systems, promoting natural and responsive movements.
Some things to consider:
The springs in SEAs are where the magic happens. Choosing the right stiffness is a balancing act between getting precise torque control and avoiding sluggish responses. Since the spring is constantly flexing, the deflection sensors need to be well tuned to give accurate torque measurements; recalibrate them regularly and you'll keep those movements smooth and predictable. Finely tuned control loops are what make SEAs shine: a high-frequency loop can make your robot more agile in handling external forces. PID controllers are a solid starting point, or you can try out more advanced strategies.
Friction can really impact your torque control, especially in gearboxes and linkages; low-friction components and proper lubrication help keep everything moving smoothly. Make sure the spring sits directly between the actuator and the joint; otherwise the robot won't get the full benefit of force sensing, and that precision will be lost. If your robot does a lot of high-impact activities, the springs can wear out, so keep an eye on them to avoid breakdowns when you least expect them. SEAs thrive on real-time feedback, so ensure your software can handle data quickly, perhaps using a real-time operating system or optimized signal processing.
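To make the torque-control idea concrete: in an SEA the joint torque can be estimated from the spring deflection (tau = k · (theta_motor − theta_joint)), and a PID loop then drives that estimate toward a commanded torque. The sketch below is illustrative only; the stiffness, gains, and sensor values are all invented.
<syntaxhighlight lang="python">
# Illustrative SEA torque loop: estimate torque from spring deflection and
# track a commanded torque with a PID controller. All numbers are invented.
SPRING_K = 80.0          # N*m/rad, series spring stiffness
KP, KI, KD = 5.0, 2.0, 0.1
DT = 0.001               # 1 kHz control loop

def estimate_torque(theta_motor: float, theta_joint: float) -> float:
    """Spring torque from the measured deflection across the series elastic element."""
    return SPRING_K * (theta_motor - theta_joint)

def pid_step(tau_cmd, tau_meas, state):
    """One PID update; returns the motor command and the updated (integral, prev_error) state."""
    integral, prev_err = state
    err = tau_cmd - tau_meas
    integral += err * DT
    deriv = (err - prev_err) / DT
    u = KP * err + KI * integral + KD * deriv
    return u, (integral, err)

# Toy usage: constant torque command against a fixed deflection measurement.
state = (0.0, 0.0)
for _ in range(5):
    tau_meas = estimate_torque(theta_motor=0.52, theta_joint=0.50)
    u, state = pid_step(tau_cmd=2.0, tau_meas=tau_meas, state=state)
    print(f"measured = {tau_meas:.2f} N*m, motor command = {u:.2f}")
</syntaxhighlight>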
==== MIT Cheetah Actuator ====
The MIT Cheetah actuator design is a notable example that several community members are considering emulating. Its design optimizes for rapid, dynamic movements and could potentially set a standard for agile robotic locomotion. It's designed to pack a lot of power into a lightweight, compact system. It offers excellent torque and control without being bulky, making it perfect for mobile robots that need to be quick on their feet. Also, it's energy-efficient and provides a high torque-to-weight ratio, so robots can move fast and precisely, which is essential for those tricky, agile movements.
One of the coolest things about this actuator is how it manages to minimize backlash, giving you smooth, accurate control over the robot's motion. Its integrated design also means the motor and controller work together seamlessly, which keeps the system streamlined. Plus, the advanced control algorithms make it easy for the actuator to handle dynamic motions, whether it's fast acceleration or sharp turns. If you're building a robot that needs to move like a sprinter while staying super nimble, the MIT Cheetah actuator is an awesome choice.
Here is the [https://fab.cba.mit.edu/classes/865.18/motion/papers/mit-cheetah-actuator.pdf MIT research paper] if you are interested in a deeper dive.
=== Open-Source Development and Collaboration ===
==== SPIN: A Revolutionary Servo Project ====
The [https://github.com/atopile/spin-servo-drive SPIN Project] by Atopile is an open-source hardware effort aimed at making it easier and more cost-effective to use BLDC servo motors. The project is particularly notable for its potential to democratize high-quality actuator technology, making it accessible to a broader range of developers and hobbyists.
=== Community Insights and Future Directions ===
==== Comprehensive Actuator Comparisons ====
The humanoid robotics community actively discusses the need for a universal platform to compare and contrast the cost and performance of commercially available actuators. This could involve developing a comprehensive database or chart detailing each actuator's cost per Newton-meter, control schemes, and RPM, providing a valuable resource for both newcomers and experienced developers.
Here is a [https://jakeread.pages.cba.mit.edu/actuators/ scatter plot] of actuators hosted at MIT.
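As a sketch of what such a comparison resource could look like in code, the snippet below computes cost per newton-meter from a small table; all names and numbers are placeholders, not real product data.
<syntaxhighlight lang="python">
# Hypothetical comparison table; prices and torque figures are placeholders only.
actuators = [
    {"name": "Actuator A", "price_usd": 300.0, "peak_torque_nm": 12.0, "rpm": 200},
    {"name": "Actuator B", "price_usd": 550.0, "peak_torque_nm": 35.0, "rpm": 160},
    {"name": "Actuator C", "price_usd": 900.0, "peak_torque_nm": 60.0, "rpm": 120},
]

for a in actuators:
    cost_per_nm = a["price_usd"] / a["peak_torque_nm"]
    print(f'{a["name"]}: {cost_per_nm:.1f} USD per N*m at {a["rpm"]} rpm')
</syntaxhighlight>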
==== Custom Actuator Developments ====
[https://irisdynamics.com/products/orca-series Iris Dynamics electric linear actuators] suggest they can match the capabilities of human muscles, making them particularly interesting for humanoid applications.
== Assembly Tips ==
===== Community Forums =====
Leverage discussions from platforms like RobotForum to avoid common pitfalls. Whether it's selecting the right planetary gearbox or figuring out the optimal motor for each joint, community insights can be invaluable.
=== Programming and Control ===
==== ROS (Robot Operating System) ====
Start with ROS for an extensive suite of tools for programming and control, suitable for managing complex robotic functions. ROS serves as a valuable abstraction for understanding the diverse components within a robotics system. However, its sheer size can make it hard to modify, the learning curve can be steep, and because it leans heavily on third-party packages, tracking down issues can require additional effort and expertise.
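To give a concrete feel for the tooling, here is a minimal ROS 2 (rclpy) node that publishes joint commands at 50 Hz. The topic name and joint name are placeholders invented for this sketch; a real robot's control stack defines its own interfaces.
<syntaxhighlight lang="python">
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import JointState

class JointCommandPublisher(Node):
    """Publishes a fixed joint target at 50 Hz; topic and joint names are placeholders."""

    def __init__(self):
        super().__init__("joint_command_publisher")
        self.pub = self.create_publisher(JointState, "joint_commands", 10)
        self.timer = self.create_timer(0.02, self.tick)  # 50 Hz

    def tick(self):
        msg = JointState()
        msg.header.stamp = self.get_clock().now().to_msg()
        msg.name = ["left_knee"]
        msg.position = [0.3]
        self.pub.publish(msg)

def main():
    rclpy.init()
    rclpy.spin(JointCommandPublisher())

if __name__ == "__main__":
    main()
</syntaxhighlight>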
==== Custom Software Solutions ====
Explore custom algorithms for adaptive control or reactive behaviors. Integrate advanced sensor feedback loops for real-time adjustments.
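As one minimal sketch of a real-time sensor feedback loop, the complementary filter below fuses gyro and accelerometer readings into a pitch estimate that a reactive controller could act on. It is a generic textbook construction with made-up constants and sample values, not code from any specific robot.
<syntaxhighlight lang="python">
import math

ALPHA = 0.98   # trust the gyro over short horizons, the accelerometer over long ones
DT = 0.005     # 200 Hz sensor loop

def complementary_filter(pitch: float, gyro_rate: float, accel_x: float, accel_z: float) -> float:
    """Fuse a gyro rate (rad/s) with an accelerometer tilt estimate into a new pitch (rad)."""
    accel_pitch = math.atan2(accel_x, accel_z)   # gravity direction gives an absolute tilt
    gyro_pitch = pitch + gyro_rate * DT          # integrate the angular rate
    return ALPHA * gyro_pitch + (1.0 - ALPHA) * accel_pitch

# Toy usage with made-up sensor samples.
pitch = 0.0
for gyro_rate, ax, az in [(0.02, 0.10, 9.8), (0.01, 0.12, 9.8), (0.0, 0.11, 9.8)]:
    pitch = complementary_filter(pitch, gyro_rate, ax, az)
print(f"estimated pitch: {math.degrees(pitch):.2f} deg")
</syntaxhighlight>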
== Experimenting with Your Humanoid Robot ==
=== Testing and Iteration ===
==== Virtual Testing Before Physical Implementation in Humanoid Robotics ====
NVIDIA's Isaac Sim and Isaac Gym, alongside other simulators, form a crucial foundation for designing and testing humanoid robots virtually. Insights and suggestions from experts working with these tools are captured below.
===== Isaac-Based Simulators and Frameworks =====
====== Isaac Sim ======
IDE Experience: Provides a comprehensive, if complex, simulation environment.
PhysX Engine: Utilizes the PhysX engine to handle both contact and joint constraints, though Isaac Sim currently does not fully expose closed-loop constraint capabilities.
Joint Constraints: Supports maximal coordinate systems, which include joint constraints that are common in articulated robots.
Virtual Sensors: Allows the simulation of perception with virtual cameras and LiDARs, providing policy training inputs rendered with NVIDIA RTX.
====== Isaac Gym ======
[[Reinforcement Learning]] Training: Enables parallel environments for fast policy training.
PHC Approach: Integrates AMP for real-time pose control, making it easier to teach new skills.
Gait Optimization Issues: While 17-DOF walking tasks work well, gait reward optimization needs refinement for more complex tasks.
Closed-Loop Articulation: Belt-driven mechanisms provide a viable alternative for certain closed-loop designs.
====== Orbit Framework ======
Unified Training Framework: Integrates Isaac Sim and Isaac Gym for modular and consistent policy validation.
OmniIsaacGymEnvs: Offers predefined tasks like walking and standing.
====== Omniverse Isaac Gym ======
Shift in Development: NVIDIA is consolidating Isaac Gym into Isaac Sim through Omniverse, providing the best of both worlds.
Challenges: Demands powerful NVIDIA GPUs, potentially limiting some development workflows.
===== External Tools and Comparative Platforms =====
====== Legged Gym ======
A repository showcasing the state-of-the-art in legged robot training.
====== MuJoCo (MJX)======
Offers a lightweight open-source alternative that supports maximal-coordinate constraints and is easier to work with. The MuJoCo_MPC repository, created by Google DeepMind, is a toolset that combines Model Predictive Control (MPC) with the [[MuJoCo]] physics engine for real-time behavior synthesis. With the advanced MJX extension, which uses GPU acceleration, multiple environments can be simulated in parallel. One approach is to try to replicate the techniques detailed in the AMP (Adversarial Motion Priors) paper to achieve agile humanoid behavior, for example by implementing a humanoid get-up sequence like the one described in the AMP research.
There’s been collaboration between different projects, like Stompy, to get humanoid simulations up and running. You could try converting Gymnasium to handle the URDF (Universal Robot Description Format) file format. Although converting to MJCF (MuJoCo's XML-based format) may present some challenges, we can still get it to work and refine the motor and actuator setup.
Although [[MuJoCo]] can be slower in single-environment simulations, the MJX extension and its parallel-processing potential make it a solid competitor. Compared to environments like NVIDIA's Isaac Gym, MuJoCo might stand out for its extensibility and rapid development. One goal could be to recreate the walking, running, and getting-up behaviors described in the AMP paper and use them as a foundation for training robust humanoid movements in simulation.
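For orientation, here is a minimal sketch of the MJX workflow, assuming a recent <code>mujoco</code> release that ships the <code>mjx</code> module plus a working JAX install; the model path is a placeholder.
<syntaxhighlight lang="python">
import jax
import mujoco
from mujoco import mjx

# Load a standard MJCF model on CPU, then mirror it onto the accelerator.
model = mujoco.MjModel.from_xml_path("humanoid.xml")  # placeholder path
data = mujoco.MjData(model)

mjx_model = mjx.put_model(model)
mjx_data = mjx.put_data(model, data)

# JIT-compile one physics step; wrapping this in jax.vmap over a batch of
# mjx_data instances is how MJX runs many environments in parallel.
step = jax.jit(mjx.step)
mjx_data = step(mjx_model, mjx_data)
print(mjx_data.qpos[:3])
</syntaxhighlight>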
====== VSim ======
Claims to be 10x faster than other simulators.
====== ManiSkill/Sapien ======
Provides tactile simulation and visual-based policy training that is up to 100x faster than Isaac Sim.
===== Best Practices for Virtual Testing =====
* Incremental Complexity: Start simple and build up to more complex environments and tasks.
* Cross-Simulator Validation: Validate robot models across simulators (e.g., Isaac and MuJoCo) to ensure robustness.
* Incorporate Real-World Fidelity: Include sensor noise and imperfections for better policy generalization (see the sketch after this list).
* Optimize Resources:
** Use Azure's A100 GPUs for Isaac training.
** Capture real-world data to refine virtual training.
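Here is the sensor-noise sketch referenced above: perturb simulated observations with zero-mean Gaussian noise before they reach the policy. The noise scale and observation values are arbitrary placeholders.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

def add_sensor_noise(obs: np.ndarray, noise_std: float = 0.01) -> np.ndarray:
    """Return a copy of the observation vector with zero-mean Gaussian noise added."""
    return obs + rng.normal(0.0, noise_std, size=obs.shape)

clean_obs = np.array([0.10, -0.25, 0.02])   # e.g. joint angles read from the simulator
noisy_obs = add_sensor_noise(clean_obs)
print(noisy_obs)
</syntaxhighlight>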
By understanding the nuances and strengths of each simulator, developers can refine their humanoid robots effectively. Using Isaac Sim, Isaac Gym, and complementary tools, a robust simulation approach ensures smooth virtual-to-physical transferability while reducing development time and costs.
More resources are available at [https://humanoids.wiki/w/Learning_algorithms Learning Algorithms]
== Real-World Testing ==
Gradually transition to physical testing, beginning with simple tasks and moving to more complex interactions.
=== Data Collection and Analysis ===
==== Camera Systems ====
Consider integrating advanced camera systems like those from e-con Systems or Arducam for visual feedback and navigation. Weigh camera choices against factors like latency, resolution, and ease of integration with your main control system.
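When comparing latency and frame-rate claims, it can help to measure a camera's effective throughput on your own hardware. A rough sketch using OpenCV follows; the device index is an assumption and may need adjusting for your setup.
<syntaxhighlight lang="python">
import time
import cv2

# Device index 0 is an assumption; adjust for your camera setup.
cap = cv2.VideoCapture(0)
frames, t0 = 0, time.time()
while frames < 120 and cap.read()[0]:   # grab up to ~120 frames, stop on read failure
    frames += 1
elapsed = time.time() - t0
cap.release()

if frames:
    print(f"captured {frames} frames in {elapsed:.2f} s -> {frames / elapsed:.1f} fps")
else:
    print("could not read from the camera")
</syntaxhighlight>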
== Advanced Customization and Community Engagement ==
=== Open Source Projects ===
Contribute to an open-source project. For instance, platforms like GitHub host numerous projects where you can collaborate with others, such as [https://github.com/kscalelabs K-Scale]. Check out [https://humanoids.wiki/w/Stompy Stompy]!
=== Modular Design ===
Engage in modular robot design to easily swap components or aesthetics. This approach allows for extensive customization and upgrades over time.
== Safety and Continuous Learning ==
=== Safety Protocols ===
Always implement robust safety measures when testing and demonstrating your robot.
8f43fe92e78ff148255f66e0395a2dfc29e5b380
Humanoid Robots Wiki:About
4
17
1064
1036
2024-05-16T20:35:36Z
Vrtnis
21
/*Cleaned up offtopic text*/
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
10f836d98c345d22b9438c434524774f3df8d0d8
Humanoid Robots Wiki talk:About
5
230
1065
984
2024-05-16T20:37:32Z
Vrtnis
21
/* Removed offtopic talk text */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
10f836d98c345d22b9438c434524774f3df8d0d8
File:Kscale db9 can bus convention.jpg
6
248
1069
2024-05-17T07:33:58Z
Ben
2
wikitext
text/x-wiki
Kscale db9 can bus convention
16e4691a83af1b3693cef81638b2e16b6b7f9379
File:Kscale phoenix can bus convention.jpg
6
249
1070
2024-05-17T07:34:26Z
Ben
2
wikitext
text/x-wiki
Kscale phoenix can bus convention
3ffa0fc846f492ced9a6395794a3415bc9b4955d
Stompy
0
2
1071
981
2024-05-17T07:35:06Z
Ben
2
/* Wiring and Connectors */
wikitext
text/x-wiki
[[File:Stompy.jpg|right|300px|thumb]]
{{infobox robot
| name = Stompy
| organization = [[K-Scale Labs]]
| cost = USD 10,000
}}
Stompy is an open-source humanoid robot developed by [[K-Scale Labs]]. Here are some relevant links:
* [[Stompy To-Do List]]
* [[Stompy Build Guide]]
= Hardware =
This page is dedicated to detailing the hardware selections for humanoid robots, including various components such as actuators, cameras, compute units, PCBs and modules, batteries, displays, microphones, speakers, as well as wiring and connectors.
== Actuators ==
Actuators are the components that allow the robot to move and interact with its environment. They convert energy into mechanical motion. Common types used in humanoid robots include:
* Servo motors
* Stepper motors
* Linear actuators
== Cameras ==
Cameras are essential for visual processing, allowing the robot to perceive its surroundings. Important considerations include:
* Resolution and frame rate
* Field of view
* Depth sensing capabilities (3D cameras)
== Compute ==
The compute section handles the processing requirements of the robot. This includes:
* Microprocessors and microcontrollers
* Single-board computers like Raspberry Pi or Nvidia Jetson
* Dedicated AI accelerators for machine learning tasks
== PCB and Modules ==
Printed Circuit Boards (PCBs) and the modules on them are the backbone of the robot's electronic system.
* Main control board
* Power management modules
* Sensor interfaces
* Communication modules (Wi-Fi, Bluetooth)
== Batteries ==
Batteries provide the necessary power to all robotic systems and are crucial for mobile autonomy. Selection factors include (see the runtime sketch after this list):
* Battery type (Li-Ion, NiMH, Lead-Acid)
* Capacity (measured in mAh or Ah)
* Voltage and energy density
* Safety features and durability
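Here is the runtime estimate referenced above: pack energy in watt-hours is roughly nominal voltage times capacity in amp-hours, and runtime is that energy divided by the robot's average power draw. All figures below are placeholders.
<syntaxhighlight lang="python">
# Placeholder figures for a back-of-the-envelope runtime estimate.
voltage_v = 48.0        # nominal pack voltage
capacity_ah = 20.0      # rated capacity
avg_power_w = 400.0     # average draw of the whole robot

energy_wh = voltage_v * capacity_ah        # 960 Wh
runtime_h = energy_wh / avg_power_w        # 2.4 h
print(f"~{energy_wh:.0f} Wh pack -> roughly {runtime_h:.1f} h at {avg_power_w:.0f} W average")
</syntaxhighlight>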
== Displays ==
Displays are used in robots for displaying information such as system status, data, and interactive elements. Key features include:
* Size variations ranging from small to large panels
* Touchscreen capabilities
* High resolution displays
== Microphones ==
Microphones enable the robot to receive and process audio inputs, crucial for voice commands and auditory data. Factors to consider are:
* Sensitivity and noise cancellation
* Directionality (omnidirectional vs. unidirectional)
* Integration with voice recognition software
== Speakers ==
Speakers allow the robot to communicate audibly with its environment, essential for interaction and alerts. Considerations include:
* Power output and sound quality
* Size and mounting options
* Compatibility with audio processing hardware
== Wiring and Connectors ==
Proper wiring and connectors ensure reliable communication and power supply throughout the robot's components.
* Types of wires (gauge, shielding)
* Connectors (pin types, waterproofing)
* Cable management solutions
=== Conventions ===
The images below show our pin convention for the CAN bus when using various connectors.
<gallery>
Kscale db9 can bus convention.jpg
Kscale phoenix can bus convention.jpg
</gallery>
= Simulation =
For the latest simulation artifacts, see [https://kscale.dev/ the website].
= Artwork =
Here's some art of Stompy!
<gallery>
Stompy 1.png
Stompy 2.png
Stompy 3.png
Stompy 4.png
</gallery>
[[Category:Robots]]
[[Category:Open Source]]
[[Category:K-Scale]]
c96e0188790ef7a2dade8c0ea1c8e1a83511c597
File:Mujoco menagerie github.png
6
250
1076
2024-05-19T16:43:43Z
Vrtnis
21
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
Stompy In MuJoCo
0
251
1084
2024-05-19T17:40:33Z
Vrtnis
21
Created page with "=== Importing the Stompy Robot XML === To import the Stompy robot XML into MuJoCo, follow these steps: 1. '''Download the Binary''' : First, download the MuJoCo binary from..."
wikitext
text/x-wiki
=== Importing the Stompy Robot XML ===
To import the Stompy robot XML into MuJoCo, follow these steps:
1. '''Download the Binary''' : First, download the MuJoCo binary from the [https://github.com/google-deepmind/mujoco/releases/ GitHub repository]
2. '''Download Robot Files''': Obtain the Stompy robot MuJoCo files from
[https://media.kscale.dev/stompy/latest_mjcf.tar.gz here].
3. '''Drag and Drop''': Open the MuJoCo simulate binary. Simply drag and drop the `robot.xml` file from the downloaded robot files into the simulate window.
e8948b51a6e0ff812f0ad44e4527e14fc8847070
1086
1084
2024-05-19T17:43:34Z
Vrtnis
21
wikitext
text/x-wiki
=== Importing the Stompy Robot XML ===
To import the Stompy robot XML into MuJoCo, follow these steps:
1. '''Download the Binary''' : First, download the MuJoCo binary from the [https://github.com/google-deepmind/mujoco/releases/ GitHub repository]
2. '''Download Robot Files''': Obtain the Stompy robot MuJoCo files from
[https://media.kscale.dev/stompy/latest_mjcf.tar.gz here].
3. '''Drag and Drop''': Open the MuJoCo simulate binary. Simply drag and drop the `robot.xml` file from the downloaded robot files into the simulate window.
[[File:Mujucoreleases1.png|300px|thumb|none|MuJoCo releases]]
5abe433883e7787ae6f2f730f99db780df34eeb6
1088
1086
2024-05-19T17:49:37Z
Vrtnis
21
wikitext
text/x-wiki
=== Importing the Stompy Robot XML ===
To import the Stompy robot XML into MuJoCo, follow these steps:
1. '''Download the Binary''' : First, download the MuJoCo binary from the [https://github.com/google-deepmind/mujoco/releases/ GitHub repository]
[[File:Mujucoreleases1.png|600px|thumb|none|MuJoCo releases]]
2. '''Download Robot Files''': Obtain the Stompy robot MuJo files from
[https://media.kscale.dev/stompy/latest_mjcf.tar.gz here].
[[File:Storekscalemjcf.png|600px|thumb|none| K-Scale Downloads]]
3. '''Drag and Drop''': Open the MuJoCo simulate binary. Simply drag and drop the `robot.xml` file from the downloaded robot files into the simulate window.
18a4ed35c42631af5b34933505f494d47f7d478f
1091
1088
2024-05-19T18:00:45Z
Vrtnis
21
wikitext
text/x-wiki
=== Importing the Stompy Robot XML ===
To import the Stompy robot XML into MuJoCo, follow these steps:
1. '''Download the Binary''' : First, download the MuJoCo binary from the [https://github.com/google-deepmind/mujoco/releases/ GitHub repository]
[[File:Mujucoreleases1.png|600px|thumb|none|MuJoCo releases]]
2. '''Download Robot Files''': Obtain the Stompy robot MuJoCo files from
[https://media.kscale.dev/stompy/latest_mjcf.tar.gz here].
[[File:Storekscalemjcf.png|600px|thumb|none| K-Scale Downloads]]
3. '''Drag and Drop''': Open the MuJoCo simulate binary. Simply drag and drop the `robot.xml` file from the downloaded robot files into the simulate window.
[[File:Run simulate.png|600px|frame|none|Run MuJoCo Simulate]]
[[File:Robotxml.png|600px|thumb|none|Downloaded Files]]
d37c9e11f91e877f7154b61c09caebb3ce49aa8d
1093
1091
2024-05-19T18:05:39Z
Vrtnis
21
/*Add MuJoCo screenshots*/
wikitext
text/x-wiki
=== Importing the Stompy Robot XML ===
To import the Stompy robot XML into MuJoCo, follow these steps:
1. '''Download the Binary''' : First, download the MuJoCo binary from the [https://github.com/google-deepmind/mujoco/releases/ GitHub repository]
[[File:Mujucoreleases1.png|600px|thumb|none|MuJoCo releases]]
2. '''Download Robot Files''': Obtain the Stompy robot MuJoCo files from
[https://media.kscale.dev/stompy/latest_mjcf.tar.gz here].
[[File:Storekscalemjcf.png|600px|thumb|none| K-Scale Downloads]]
3. '''Drag and Drop''': Open the MuJoCo simulate binary. Simply drag and drop the `robot.xml` file from the downloaded robot files into the simulate window.
[[File:Run simulate.png|600px|frame|none|Run MuJoCo Simulate]]
[[File:Robotxml.png|600px|thumb|none|Downloaded Files]]
[[File:Stompy drag drop mujoco.png|800px|thumb|none|Drop into MuJoCo Simulate]]
fb779379b0f74d1c77f2f2f259ff42159aaa7e11
1094
1093
2024-05-19T18:06:52Z
Vrtnis
21
/*Added Guides Tag*/
wikitext
text/x-wiki
=== Importing the Stompy Robot XML ===
To import the Stompy robot XML into MuJoCo, follow these steps:
1. '''Download the Binary''' : First, download the MuJoCo binary from the [https://github.com/google-deepmind/mujoco/releases/ GitHub repository]
[[File:Mujucoreleases1.png|600px|thumb|none|MuJoCo releases]]
2. '''Download Robot Files''': Obtain the Stompy robot MuJoCo files from
[https://media.kscale.dev/stompy/latest_mjcf.tar.gz here].
[[File:Storekscalemjcf.png|600px|thumb|none| K-Scale Downloads]]
3. '''Drag and Drop''': Open the MuJoCo simulate binary. Simply drag and drop the `robot.xml` file from the downloaded robot files into the simulate window.
[[File:Run simulate.png|600px|frame|none|Run MuJoCo Simulate]]
[[File:Robotxml.png|600px|thumb|none|Downloaded Files]]
[[File:Stompy drag drop mujoco.png|800px|thumb|none|Drop into MuJoCo Simulate]]
[[Category: Guides]]
c5f2e2626cb124bf5bb84ae4f4a21581f96b50a9
1095
1094
2024-05-19T18:14:55Z
Vrtnis
21
wikitext
text/x-wiki
=== Importing the Stompy Robot XML ===
To import Stompy's XML into MuJoCo, follow these steps:
1. '''Download the Binary''': First, download the MuJoCo binary from the [https://github.com/google-deepmind/mujoco/releases/ GitHub repository].
[[File:Mujucoreleases1.png|600px|thumb|none|MuJoCo releases]]
2. '''Download Robot Files''': Obtain the Stompy robot MuJoCo files from
[https://media.kscale.dev/stompy/latest_mjcf.tar.gz here].
[[File:Storekscalemjcf.png|600px|thumb|none| K-Scale Downloads]]
3. '''Drag and Drop''': Open the MuJoCo simulate binary. Simply drag and drop the <code>robot.xml</code> file from the downloaded robot files into the simulate window.
[[File:Run simulate.png|600px|frame|none|Run MuJoCo Simulate]]
[[File:Robotxml.png|600px|thumb|none|Downloaded Files]]
[[File:Stompy drag drop mujoco.png|800px|thumb|none|Drop into MuJoCo Simulate]]
[[Category: Guides]]
5df30072c630cdb52b23658bccf0ee20c03966be
File:Mujucoreleases1.png
6
252
1085
2024-05-19T17:42:44Z
Vrtnis
21
wikitext
text/x-wiki
Screenshot of MuJoCo releases
5c8406c0bc12550eb294e17ee0bc38ed53955515
File:Storekscalemjcf.png
6
253
1087
2024-05-19T17:48:05Z
Vrtnis
21
wikitext
text/x-wiki
KScale MJCF Download
37caa6cccc01c52c6e4026f53ce363bc8b50facd
File:Robotxml.png
6
254
1089
2024-05-19T17:51:37Z
Vrtnis
21
wikitext
text/x-wiki
Robot XML Download
75f74e78599c7066712bfedf574b24334e8f8b81
File:Run simulate.png
6
255
1090
2024-05-19T17:59:48Z
Vrtnis
21
wikitext
text/x-wiki
Run ./simulate
44535584d6cb4bfd412c2741e20745363df98de6
File:Stompy drag drop mujoco.png
6
256
1092
2024-05-19T18:04:07Z
Vrtnis
21
wikitext
text/x-wiki
Drag and Drop Into MuJoCo Simulate
c0166c795dc626e6efe1ae8ff77fa78a4e78a8d9
MuJoCo WASM
0
257
1097
2024-05-19T18:58:40Z
Vrtnis
21
/*Setup guide for MuJoCo WASM*/
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
1098
1097
2024-05-19T19:01:32Z
Vrtnis
21
/*Add install process*/
wikitext
text/x-wiki
== Building ==
=== 1. Install emscripten ===
First, you need to install emscripten, which is a compiler toolchain for WebAssembly.
=== 2. Build the mujoco_wasm Binary ===
Next, you'll build the MuJoCo WebAssembly binary.
==== On Linux ====
<code>
mkdir build
cd build
emcmake cmake ..
make
</code>
129397f418eac4b0516b594b002de9e882441f17
1099
1098
2024-05-19T19:15:55Z
Vrtnis
21
/*emscripten instructions*/
wikitext
text/x-wiki
== Building ==
=== 1. Install emscripten ===
First, you need to install emscripten, which is a compiler toolchain for WebAssembly.
=== 1. Get the emsdk repo ===
<code>
git clone https://github.com/emscripten-core/emsdk.git
</code>
=== 2. Enter that directory ===
<code>
cd emsdk
</code>
=== 3. Download and install the latest SDK tools ===
<code>
./emsdk install latest
</code>
=== 4. Make the "latest" SDK "active" ===
<code>
./emsdk activate latest
</code>
=== 5. Activate PATH and other environment variables ===
<code>
source ./emsdk_env.sh
</code>
These variables are set for the current terminal. If you want to make it for all terminals, you can add them to any terminal profile. Here they are:
The environment variables:
<code>
EMSDK = < path to emsdk dir >
EM_CONFIG = ~/.emscripten
EMSDK_NODE = < path to emsdk dir >/node/12.9.1_64bit/bin/node
</code>
=== 6. Now just try it! ===
<code>
emcc
</code>
=== 2. Build the mujoco_wasm Binary ===
Next, you'll build the MuJoCo WebAssembly binary.
==== On Linux ====
<code>
mkdir build
cd build
emcmake cmake ..
make
</code>
59e77c2b3b8841f7edd7ba28aa0b833ad52e20b9
1100
1099
2024-05-19T19:31:16Z
Vrtnis
21
/*Update build instructions*/
wikitext
text/x-wiki
== Building ==
=== 1. Install emscripten ===
First, you need to install emscripten, which is a compiler toolchain for WebAssembly.
==== 1.1. Get the emsdk repo ====
<code>
git clone https://github.com/emscripten-core/emsdk.git
</code>
==== 1.2. Enter that directory ====
<code>
cd emsdk
</code>
==== 1.3. Download and install the latest SDK tools ====
<code>
./emsdk install latest
</code>
==== 1.4. Make the "latest" SDK "active" ====
<code>
./emsdk activate latest
</code>
==== 1.5. Activate PATH and other environment variables ====
<code>
source ./emsdk_env.sh
</code>
These variables are set for the current terminal. If you want to make it for all terminals, you can add them to any terminal profile. Here they are:
The environment variables:
<code>
EMSDK = < path to emsdk dir >
EM_CONFIG = ~/.emscripten
EMSDK_NODE = < path to emsdk dir >/node/12.9.1_64bit/bin/node
</code>
==== 1.6. Now just try it! ====
<code>
emcc
</code>
'''Tip:''' If you get an error with "undefined symbol: saveSetjmp/testSetjmp" at the build step, revert to:
<code>
./emsdk install 3.1.56 && ./emsdk activate 3.1.56 && source ./emsdk_env.sh
</code>
=== 2. Build the mujoco_wasm Binary ===
Next, you'll build the MuJoCo WebAssembly binary.
==== On Linux ====
<code>
mkdir build
cd build
emcmake cmake ..
make
</code>
15611c14f8d5a5327dddbab31d26e38663adb57c
1101
1100
2024-05-19T19:40:53Z
Vrtnis
21
wikitext
text/x-wiki
== Building ==
=== 1. Install emscripten ===
First, you need to install emscripten, which is a compiler toolchain for WebAssembly.
==== Get the emsdk repo ====
<code>
git clone https://github.com/emscripten-core/emsdk.git
</code>
==== Enter that directory ====
<code>
cd emsdk
</code>
==== Download and install the latest SDK tools ====
<code>
./emsdk install latest
</code>
==== Make the "latest" SDK "active" ====
<code>
./emsdk activate latest
</code>
==== Activate PATH and other environment variables ====
<code>
source ./emsdk_env.sh
</code>
These variables are set for the current terminal. If you want to make it for all terminals, you can add them to any terminal profile. Here they are:
The environment variables:
<code>
EMSDK = < path to emsdk dir >
EM_CONFIG = ~/.emscripten
EMSDK_NODE = < path to emsdk dir >/node/12.9.1_64bit/bin/node
</code>
==== Now just try it! ====
<code>
emcc
</code>
'''Tip:''' If you get an error with "undefined symbol: saveSetjmp/testSetjmp" at the build step, revert to:
<code>
./emsdk install 3.1.56 && ./emsdk activate 3.1.56 && source ./emsdk_env.sh
</code>
=== 2. Build the mujoco_wasm Binary ===
Next, you'll build the MuJoCo WebAssembly binary.
==== On Linux ====
<code>
mkdir build
cd build
emcmake cmake ..
make
</code>
=== 2. Build the mujoco_wasm Binary ===
Next, you'll build the MuJoCo WebAssembly binary.
==== On Linux ====
<code>
mkdir build
cd build
emcmake cmake ..
make
</code>
2570e901554e22950822332c612c9e83c849eed3
1102
1101
2024-05-19T19:46:37Z
Vrtnis
21
wikitext
text/x-wiki
== 1. Install emscripten ==
First, you need to install emscripten, which is a compiler toolchain for WebAssembly.
=== 1.1. Get the emsdk repo ===
<code>
git clone https://github.com/emscripten-core/emsdk.git
</code>
=== 1.2. Enter that directory ===
<code>
cd emsdk
</code>
=== 1.3. Download and install the latest SDK tools ===
<code>
./emsdk install latest
</code>
=== 1.4. Make the "latest" SDK "active" ===
<code>
./emsdk activate latest
</code>
=== 1.5. Activate PATH and other environment variables ===
<code>
source ./emsdk_env.sh
</code>
These variables are set for the current terminal. If you want to make it for all terminals, you can add them to any terminal profile. Here they are:
The environment variables:
<code>
EMSDK = < path to emsdk dir >
EM_CONFIG = ~/.emscripten
EMSDK_NODE = < path to emsdk dir >/node/12.9.1_64bit/bin/node
</code>
=== 1.6. Now just try it! ===
<code>
emcc
</code>
'''Tip:''' If you get an error with "undefined symbol: saveSetjmp/testSetjmp" at the build step, revert to:
<code>
./emsdk install 3.1.56 && ./emsdk activate 3.1.56 && source ./emsdk_env.sh
</code>
== 2. Build the mujoco_wasm Binary ==
Next, you'll build the MuJoCo WebAssembly binary.
=== 2.1. On Linux ===
<code>
mkdir build
cd build
emcmake cmake ..
make
</code>
8579635af0b02dbdd905ee5e8bce79085502c074
1103
1102
2024-05-19T19:47:49Z
Vrtnis
21
/*Reorg*/
wikitext
text/x-wiki
== Install emscripten ==
First, you need to install emscripten, which is a compiler toolchain for WebAssembly.
=== Get the emsdk repo ===
<code>
git clone https://github.com/emscripten-core/emsdk.git
</code>
=== Enter that directory ===
<code>
cd emsdk
</code>
=== Download and install the latest SDK tools ===
<code>
./emsdk install latest
</code>
=== Make the "latest" SDK "active" ===
<code>
./emsdk activate latest
</code>
=== Activate PATH and other environment variables ===
<code>
source ./emsdk_env.sh
</code>
These variables are set for the current terminal. If you want to make it for all terminals, you can add them to any terminal profile. Here they are:
The environment variables:
<code>
EMSDK = < path to emsdk dir >
EM_CONFIG = ~/.emscripten
EMSDK_NODE = < path to emsdk dir >/node/12.9.1_64bit/bin/node
</code>
=== Now just try it! ===
<code>
emcc
</code>
'''Tip:''' If you get an error with "undefined symbol: saveSetjmp/testSetjmp" at the build step, revert to:
<code>
./emsdk install 3.1.56 && ./emsdk activate 3.1.56 && source ./emsdk_env.sh
</code>
== Build the mujoco_wasm Binary ==
Next, you'll build the MuJoCo WebAssembly binary.
=== On Linux ===
<code>
mkdir build
cd build
emcmake cmake ..
make
</code>
3d3dacfcc88606f75022c02bfc23ea4be1af8737
1104
1103
2024-05-19T19:49:01Z
Vrtnis
21
wikitext
text/x-wiki
== Install emscripten ==
First, you need to install emscripten, which is a compiler toolchain for WebAssembly.
=== Get the emsdk repo ===
<code>
git clone https://github.com/emscripten-core/emsdk.git
</code>
=== Enter that directory ===
<code>
cd emsdk
</code>
=== Download and install the latest SDK tools ===
<code>
./emsdk install latest
</code>
=== Make the "latest" SDK "active" ===
<code>
./emsdk activate latest
</code>
=== Activate PATH and other environment variables ===
<code>
source ./emsdk_env.sh
</code>
These variables are set for the current terminal. If you want to make it for all terminals, you can add them to any terminal profile. Here they are:
The environment variables:
<code>
EMSDK = < path to emsdk dir >
EM_CONFIG = ~/.emscripten
EMSDK_NODE = < path to emsdk dir >/node/12.9.1_64bit/bin/node
</code>
=== Now just try it! ===
<code>
emcc
</code>
== Build the mujoco_wasm Binary ==
Next, you'll build the MuJoCo WebAssembly binary.
<code>
mkdir build
cd build
emcmake cmake ..
make
</code>
'''Tip:''' If you get an error with "undefined symbol: saveSetjmp/testSetjmp" at the build step, revert to:
<code>
./emsdk install 3.1.56 && ./emsdk activate 3.1.56 && source ./emsdk_env.sh
</code>
a5fbe0bb4b5784d62cd08db011deb4d06d3551c2
1105
1104
2024-05-19T20:00:51Z
Vrtnis
21
/* Build the mujoco_wasm Binary */
wikitext
text/x-wiki
== Install emscripten ==
First, you need to install emscripten, which is a compiler toolchain for WebAssembly.
=== Get the emsdk repo ===
<code>
git clone https://github.com/emscripten-core/emsdk.git
</code>
=== Enter that directory ===
<code>
cd emsdk
</code>
=== Download and install the latest SDK tools ===
<code>
./emsdk install latest
</code>
=== Make the "latest" SDK "active" ===
<code>
./emsdk activate latest
</code>
=== Activate PATH and other environment variables ===
<code>
source ./emsdk_env.sh
</code>
These variables are set for the current terminal. If you want to make it for all terminals, you can add them to any terminal profile. Here they are:
The environment variables:
<code>
EMSDK = < path to emsdk dir >
EM_CONFIG = ~/.emscripten
EMSDK_NODE = < path to emsdk dir >/node/12.9.1_64bit/bin/node
</code>
=== Now just try it! ===
<code>
emcc
</code>
== Build the mujoco_wasm Binary ==
Next, you'll build the MuJoCo WebAssembly binary.
<syntaxhighlight lang="bash">
mkdir build
cd build
emcmake cmake ..
make
</syntaxhighlight>
'''Tip:''' If you get an error with "undefined symbol: saveSetjmp/testSetjmp" at the build step, revert to:
<code>
./emsdk install 3.1.56 && ./emsdk activate 3.1.56 && source ./emsdk_env.sh
</code>
4b7e66d0975441a40ff55c9892af7145d0439181
1107
1105
2024-05-19T20:20:03Z
Vrtnis
21
/* Add screenshot */
wikitext
text/x-wiki
== Install emscripten ==
First, you need to install emscripten, which is a compiler toolchain for WebAssembly.
=== Get the emsdk repo ===
<code>
git clone https://github.com/emscripten-core/emsdk.git
</code>
=== Enter that directory ===
<code>
cd emsdk
</code>
=== Download and install the latest SDK tools ===
<code>
./emsdk install latest
</code>
=== Make the "latest" SDK "active" ===
<code>
./emsdk activate latest
</code>
=== Activate PATH and other environment variables ===
<code>
source ./emsdk_env.sh
</code>
These variables are set for the current terminal. If you want to make it for all terminals, you can add them to any terminal profile. Here they are:
The environment variables:
<code>
EMSDK = < path to emsdk dir >
EM_CONFIG = ~/.emscripten
EMSDK_NODE = < path to emsdk dir >/node/12.9.1_64bit/bin/node
</code>
=== Now just try it! ===
<code>
emcc
</code>
== Build the mujoco_wasm Binary ==
Next, you'll build the MuJoCo WebAssembly binary.
<syntaxhighlight lang="bash">
mkdir build
cd build
emcmake cmake ..
make
</syntaxhighlight>
[[File:Emcmake.png|800px|thumb|none|emcmake cmake ..]]
'''Tip:''' If you get an error with "undefined symbol: saveSetjmp/testSetjmp" at the build step, revert to:
<code>
./emsdk install 3.1.56 && ./emsdk activate 3.1.56 && source ./emsdk_env.sh
</code>
e5cd7387c517b787275df96a8ae1d75d9404c5ec
File:Emcmake.png
6
258
1106
2024-05-19T20:19:24Z
Vrtnis
21
wikitext
text/x-wiki
emcmake
86248b08caa9aaab9ac370c07d55f80e27e5088e
File:Carbon (1).png
6
259
1108
2024-05-19T20:23:51Z
Vrtnis
21
wikitext
text/x-wiki
wasm build
bff69fa2a678fd537b7ca33a54c1263fffb45ed7
MuJoCo WASM
0
257
1109
1107
2024-05-19T20:24:49Z
Vrtnis
21
/* Build the mujoco_wasm Binary */
wikitext
text/x-wiki
== Install emscripten ==
First, you need to install emscripten, which is a compiler toolchain for WebAssembly.
=== Get the emsdk repo ===
<code>
git clone https://github.com/emscripten-core/emsdk.git
</code>
=== Enter that directory ===
<code>
cd emsdk
</code>
=== Download and install the latest SDK tools ===
<code>
./emsdk install latest
</code>
=== Make the "latest" SDK "active" ===
<code>
./emsdk activate latest
</code>
=== Activate PATH and other environment variables ===
<code>
source ./emsdk_env.sh
</code>
These variables are set for the current terminal. If you want to make it for all terminals, you can add them to any terminal profile. Here they are:
The environment variables:
<code>
EMSDK = < path to emsdk dir >
EM_CONFIG = ~/.emscripten
EMSDK_NODE = < path to emsdk dir >/node/12.9.1_64bit/bin/node
</code>
=== Now just try it! ===
<code>
emcc
</code>
== Build the mujoco_wasm Binary ==
Next, you'll build the MuJoCo WebAssembly binary.
<syntaxhighlight lang="bash">
mkdir build
cd build
emcmake cmake ..
make
</syntaxhighlight>
[[File:Carbon (1).png|800px|thumb|none|emcmake cmake ..]]
'''Tip:''' If you get an error with "undefined symbol: saveSetjmp/testSetjmp" at the build step, revert to:
<code>
./emsdk install 3.1.56 && ./emsdk activate 3.1.56 && source ./emsdk_env.sh
</code>
bc5c1125d587c1a6bef1ee3032041aad6dadc0eb
1111
1109
2024-05-19T20:27:20Z
Vrtnis
21
/* Build the mujoco_wasm Binary */
wikitext
text/x-wiki
== Install emscripten ==
First, you need to install emscripten, which is a compiler toolchain for WebAssembly.
=== Get the emsdk repo ===
<code>
git clone https://github.com/emscripten-core/emsdk.git
</code>
=== Enter that directory ===
<code>
cd emsdk
</code>
=== Download and install the latest SDK tools ===
<code>
./emsdk install latest
</code>
=== Make the "latest" SDK "active" ===
<code>
./emsdk activate latest
</code>
=== Activate PATH and other environment variables ===
<code>
source ./emsdk_env.sh
</code>
These variables are set for the current terminal. If you want to make it for all terminals, you can add them to any terminal profile. Here they are:
The environment variables:
<code>
EMSDK = < path to emsdk dir >
EM_CONFIG = ~/.emscripten
EMSDK_NODE = < path to emsdk dir >/node/12.9.1_64bit/bin/node
</code>
=== Now just try it! ===
<code>
emcc
</code>
== Build the mujoco_wasm Binary ==
Next, you'll build the MuJoCo WebAssembly binary.
<syntaxhighlight lang="bash">
mkdir build
cd build
emcmake cmake ..
make
</syntaxhighlight>
[[File:Carbon (1).png|800px|thumb|none|emcmake cmake ..]]
[[File:Carbon (2).png|400px|thumb|none|make]]
'''Tip:''' If you get an error with "undefined symbol: saveSetjmp/testSetjmp" at the build step, revert to:
<code>
./emsdk install 3.1.56 && ./emsdk activate 3.1.56 && source ./emsdk_env.sh
</code>
fc1a06fcc2c170c294189b1396faf2ff437e7ac1
1112
1111
2024-05-19T20:33:03Z
Vrtnis
21
wikitext
text/x-wiki
== Install emscripten ==
First, you need to install emscripten, which is a compiler toolchain for WebAssembly.
=== Get the emsdk repo ===
<code>
git clone https://github.com/emscripten-core/emsdk.git
</code>
=== Enter that directory ===
<code>
cd emsdk
</code>
=== Download and install the latest SDK tools ===
<code>
./emsdk install latest
</code>
=== Make the "latest" SDK "active" ===
<code>
./emsdk activate latest
</code>
=== Activate PATH and other environment variables ===
<code>
source ./emsdk_env.sh
</code>
These variables are set for the current terminal. If you want to make it for all terminals, you can add them to any terminal profile. Here they are:
The environment variables:
<code>
EMSDK = < path to emsdk dir >
EM_CONFIG = ~/.emscripten
EMSDK_NODE = < path to emsdk dir >/node/12.9.1_64bit/bin/node
</code>
=== Now just try it! ===
<code>
emcc
</code>
== Build the mujoco_wasm Binary ==
Next, you'll build the MuJoCo WebAssembly binary.
<syntaxhighlight lang="bash">
mkdir build
cd build
emcmake cmake ..
make
</syntaxhighlight>
[[File:Carbon (1).png|800px|thumb|none|emcmake cmake ..]]
[[File:Carbon (2).png|400px|thumb|none|make]]
'''Tip:''' If you get an error with "undefined symbol: saveSetjmp/testSetjmp" at the build step, revert to:
<code>
./emsdk install 3.1.56 && ./emsdk activate 3.1.56 && source ./emsdk_env.sh
</code>
== Running in Browser ==
Now just do in the root of your mujoco folder
<code>
python -m http.server 8000
</code>
Then navigate to:
<code>
http://localhost:8000/index.html
</code>
03949e76f1b818243e4028cbb856acb25f58928c
1113
1112
2024-05-19T20:33:24Z
Vrtnis
21
/* Running in Browser */
wikitext
text/x-wiki
== Install emscripten ==
First, you need to install emscripten, which is a compiler toolchain for WebAssembly.
=== Get the emsdk repo ===
<code>
git clone https://github.com/emscripten-core/emsdk.git
</code>
=== Enter that directory ===
<code>
cd emsdk
</code>
=== Download and install the latest SDK tools ===
<code>
./emsdk install latest
</code>
=== Make the "latest" SDK "active" ===
<code>
./emsdk activate latest
</code>
=== Activate PATH and other environment variables ===
<code>
source ./emsdk_env.sh
</code>
These variables are set for the current terminal. If you want to make it for all terminals, you can add them to any terminal profile. Here they are:
The environment variables:
<code>
EMSDK = < path to emsdk dir >
EM_CONFIG = ~/.emscripten
EMSDK_NODE = < path to emsdk dir >/node/12.9.1_64bit/bin/node
</code>
=== Now just try it! ===
<code>
emcc
</code>
== Build the mujoco_wasm Binary ==
Next, you'll build the MuJoCo WebAssembly binary.
<syntaxhighlight lang="bash">
mkdir build
cd build
emcmake cmake ..
make
</syntaxhighlight>
[[File:Carbon (1).png|800px|thumb|none|emcmake cmake ..]]
[[File:Carbon (2).png|400px|thumb|none|make]]
'''Tip:''' If you get an error with "undefined symbol: saveSetjmp/testSetjmp" at the build step, revert to:
<code>
./emsdk install 3.1.56 && ./emsdk activate 3.1.56 && source ./emsdk_env.sh
</code>
== Running in Browser ==
Now just do in the root of your mujoco folder
<code>
python -m http.server 8000
</code>
Then navigate to:
<code>
http://localhost:8000/index.html
</code>
1fdc8f30da406f339003663d9da7b4bf1d536131
1115
1113
2024-05-19T20:42:03Z
Vrtnis
21
/* Running in Browser */
wikitext
text/x-wiki
== Install emscripten ==
First, you need to install emscripten, which is a compiler toolchain for WebAssembly.
=== Get the emsdk repo ===
<code>
git clone https://github.com/emscripten-core/emsdk.git
</code>
=== Enter that directory ===
<code>
cd emsdk
</code>
=== Download and install the latest SDK tools ===
<code>
./emsdk install latest
</code>
=== Make the "latest" SDK "active" ===
<code>
./emsdk activate latest
</code>
=== Activate PATH and other environment variables ===
<code>
source ./emsdk_env.sh
</code>
These variables are set for the current terminal. If you want to make it for all terminals, you can add them to any terminal profile. Here they are:
The environment variables:
<code>
EMSDK = < path to emsdk dir >
EM_CONFIG = ~/.emscripten
EMSDK_NODE = < path to emsdk dir >/node/12.9.1_64bit/bin/node
</code>
=== Now just try it! ===
<code>
emcc
</code>
== Build the mujoco_wasm Binary ==
Next, you'll build the MuJoCo WebAssembly binary.
<syntaxhighlight lang="bash">
mkdir build
cd build
emcmake cmake ..
make
</syntaxhighlight>
[[File:Carbon (1).png|800px|thumb|none|emcmake cmake ..]]
[[File:Carbon (2).png|400px|thumb|none|make]]
'''Tip:''' If you get an error with "undefined symbol: saveSetjmp/testSetjmp" at the build step, revert to:
<code>
./emsdk install 3.1.56 && ./emsdk activate 3.1.56 && source ./emsdk_env.sh
</code>
== Running in Browser ==
Now just do in the root of your mujoco folder
<code>
python -m http.server 8000
</code>
Then navigate to:
<code>
http://localhost:8000/index.html
</code>
[[File:Wasm screenshot13-40-40.png|800px|thumb|none|MuJoCo running in browser]]
7caaa8311c256a957122e067e04d4612687a3a44
1116
1115
2024-05-19T21:24:47Z
Vrtnis
21
/* Running in Browser */
wikitext
text/x-wiki
== Install emscripten ==
First, you need to install emscripten, which is a compiler toolchain for WebAssembly.
=== Get the emsdk repo ===
<code>
git clone https://github.com/emscripten-core/emsdk.git
</code>
=== Enter that directory ===
<code>
cd emsdk
</code>
=== Download and install the latest SDK tools ===
<code>
./emsdk install latest
</code>
=== Make the "latest" SDK "active" ===
<code>
./emsdk activate latest
</code>
=== Activate PATH and other environment variables ===
<code>
source ./emsdk_env.sh
</code>
These variables are set for the current terminal only. To make them available in all terminals, add them to your shell profile. The environment variables are:
<code>
EMSDK = < path to emsdk dir >
EM_CONFIG = ~/.emscripten
EMSDK_NODE = < path to emsdk dir >/node/12.9.1_64bit/bin/node
</code>
=== Now just try it! ===
<code>
emcc
</code>
== Build the mujoco_wasm Binary ==
Next, you'll build the MuJoCo WebAssembly binary.
<syntaxhighlight lang="bash">
mkdir build
cd build
emcmake cmake ..
make
</syntaxhighlight>
[[File:Carbon (1).png|800px|thumb|none|emcmake cmake ..]]
[[File:Carbon (2).png|400px|thumb|none|make]]
'''Tip:''' If you get an error with "undefined symbol: saveSetjmp/testSetjmp" at the build step, revert to:
<code>
./emsdk install 3.1.56 && ./emsdk activate 3.1.56 && source ./emsdk_env.sh
</code>
== Running in Browser ==
Run this from the root of your mujoco_wasm folder to start a local server.
<code>
python -m http.server 8000
</code>
Then navigate to:
<code>
http://localhost:8000/index.html
</code>
[[File:Wasm screenshot13-40-40.png|800px|thumb|none|MuJoCo running in browser]]
84106a7907262faa62db2a6ddf56c528e51ce230
File:Carbon (2).png
6
260
1110
2024-05-19T20:26:40Z
Vrtnis
21
wikitext
text/x-wiki
make
5821eb27d7b71c9078000da31a5a654c97e401b9
File:Wasm screenshot13-40-40.png
6
261
1114
2024-05-19T20:41:22Z
Vrtnis
21
wikitext
text/x-wiki
mujoco wasm screenshot
a53efe8f5741b3111faa2da0556dad3a0022dc66
User:Is2ac
2
262
1117
2024-05-19T23:39:04Z
Is2ac
30
Created page with " {{infobox person | name = Isaac Light | organization = [[K-Scale Labs]] | title = Employee }} [[Category: K-Scale Employees]]"
wikitext
text/x-wiki
{{infobox person
| name = Isaac Light
| organization = [[K-Scale Labs]]
| title = Employee
}}
[[Category: K-Scale Employees]]
dd118feb47faa916a98a790323063aaca8b09b2d
Getting Started with Machine Learning
0
263
1118
2024-05-20T01:29:17Z
Ben
2
Created page with "This is [[User:Ben]]'s guide to getting started with machine learning. === Dependencies === Here's some useful dependencies that I use: * [https://astral.sh/blog/uv uv] **..."
wikitext
text/x-wiki
This is [[User:Ben]]'s guide to getting started with machine learning.
=== Dependencies ===
Here's some useful dependencies that I use:
* [https://astral.sh/blog/uv uv]
** This is similar to Pip but written in Rust and is way faster
** It has nice management of virtual environments
** Can use Conda instead but it is much slower
* [https://github.com/features/copilot Github Copilot]
* [https://github.com/kscalelabs/mlfab mlfab]
** This is a Python package I made to help make it easy to quickly try out machine learning ideas in PyTorch
=== Installing Starter Project ===
* Go to [https://github.com/kscalelabs/getting-started this project] and install it
==== Opening the project in VSCode ====
* Create a VSCode config file that looks something like this:
<syntaxhighlight lang="json">
{
"folders": [
{
"name": "Getting Started",
"path": "/home/ubuntu/Github/getting_started"
},
{
"name": "Workspaces",
"path": "/home/ubuntu/.code-workspaces"
}
],
"settings": {
"cmake.configureSettings": {
"CMAKE_CUDA_COMPILER": "/usr/bin/nvcc",
"CMAKE_PREFIX_PATH": [
"/home/ubuntu/.virtualenvs/getting-started/lib/python3.11/site-packages/torch/share/cmake"
],
"PYTHON_EXECUTABLE": "/home/ubuntu/.virtualenvs/getting-started/bin/python",
"TORCH_CUDA_ARCH_LIST": "'8.0'"
},
"python.defaultInterpreterPath": "/home/ubuntu/.virtualenvs/getting-started/bin/python",
"ruff.path": [
"/home/ubuntu/.virtualenvs/getting-started/bin/ruff"
]
}
}
</syntaxhighlight>
* Install the [https://code.visualstudio.com/docs/remote/ssh VSCode SSH extension]
* SSH into the cluster (see [[K-Scale Cluster]] for instructions)
* Open the workspace that you created in VSCode
62db86cfd15f4ab215f53ee2dc42a3d569861926
1119
1118
2024-05-20T01:41:02Z
Ben
2
wikitext
text/x-wiki
This is [[User:Ben]]'s guide to getting started with machine learning.
=== Dependencies ===
Here's some useful dependencies that I use:
* [https://astral.sh/blog/uv uv]
** This is similar to Pip but written in Rust and is way faster
** It has nice management of virtual environments
** Can use Conda instead but it is much slower
* [https://github.com/features/copilot Github Copilot]
* [https://github.com/kscalelabs/mlfab mlfab]
** This is a Python package I made to help make it easy to quickly try out machine learning ideas in PyTorch
* Coding tools
** [https://mypy-lang.org/ mypy] static analysis
** [https://github.com/psf/black black] code formatter
** [https://docs.astral.sh/ruff/ ruff] alternative to flake8
=== Installing Starter Project ===
* Go to [https://github.com/kscalelabs/getting-started this project] and install it
==== Opening the project in VSCode ====
* Create a VSCode config file that looks something like this:
<syntaxhighlight lang="json">
{
"folders": [
{
"name": "Getting Started",
"path": "/home/ubuntu/Github/getting_started"
},
{
"name": "Workspaces",
"path": "/home/ubuntu/.code-workspaces"
}
],
"settings": {
"cmake.configureSettings": {
"CMAKE_CUDA_COMPILER": "/usr/bin/nvcc",
"CMAKE_PREFIX_PATH": [
"/home/ubuntu/.virtualenvs/getting-started/lib/python3.11/site-packages/torch/share/cmake"
],
"PYTHON_EXECUTABLE": "/home/ubuntu/.virtualenvs/getting-started/bin/python",
"TORCH_CUDA_ARCH_LIST": "'8.0'"
},
"python.defaultInterpreterPath": "/home/ubuntu/.virtualenvs/getting-started/bin/python",
"ruff.path": [
"/home/ubuntu/.virtualenvs/getting-started/bin/ruff"
]
}
}
</syntaxhighlight>
* Install the [https://code.visualstudio.com/docs/remote/ssh VSCode SSH extension]
* SSH into the cluster (see [[K-Scale Cluster]] for instructions)
* Open the workspace that you created in VSCode
=== Useful Brain Dump Stuff ===
* Use <code>breakpoint()</code> to debug code
75ead204c219c5c1eaddc5ebc369352857364813
1120
1119
2024-05-20T01:47:14Z
Ben
2
wikitext
text/x-wiki
This is [[User:Ben]]'s guide to getting started with machine learning.
=== Dependencies ===
Here's some useful dependencies that I use:
* [https://astral.sh/blog/uv uv]
** This is similar to Pip but written in Rust and is way faster
** It has nice management of virtual environments
** Can use Conda instead but it is much slower
* [https://github.com/features/copilot Github Copilot]
* [https://github.com/kscalelabs/mlfab mlfab]
** This is a Python package I made to help make it easy to quickly try out machine learning ideas in PyTorch
* Coding tools
** [https://mypy-lang.org/ mypy] static analysis
** [https://github.com/psf/black black] code formatter
** [https://docs.astral.sh/ruff/ ruff] alternative to flake8
=== Installing Starter Project ===
* Go to [https://github.com/kscalelabs/getting-started this project] and install it
==== Opening the project in VSCode ====
* Create a VSCode config file that looks something like this:
<syntaxhighlight lang="json">
{
"folders": [
{
"name": "Getting Started",
"path": "/home/ubuntu/Github/getting_started"
},
{
"name": "Workspaces",
"path": "/home/ubuntu/.code-workspaces"
}
],
"settings": {
"cmake.configureSettings": {
"CMAKE_CUDA_COMPILER": "/usr/bin/nvcc",
"CMAKE_PREFIX_PATH": [
"/home/ubuntu/.virtualenvs/getting-started/lib/python3.11/site-packages/torch/share/cmake"
],
"PYTHON_EXECUTABLE": "/home/ubuntu/.virtualenvs/getting-started/bin/python",
"TORCH_CUDA_ARCH_LIST": "'8.0'"
},
"python.defaultInterpreterPath": "/home/ubuntu/.virtualenvs/getting-started/bin/python",
"ruff.path": [
"/home/ubuntu/.virtualenvs/getting-started/bin/ruff"
]
}
}
</syntaxhighlight>
* Install the [https://code.visualstudio.com/docs/remote/ssh VSCode SSH extension]
* SSH into the cluster (see [[K-Scale Cluster]] for instructions)
* Open the workspace that you created in VSCode
=== Useful Brain Dump Stuff ===
* Use <code>breakpoint()</code> to debug code
* Check out the [https://github.com/kscalelabs/mlfab/tree/master/examples mlfab examples directory] for some ideas
* It is a good idea to try to write the full training loop yourself to figure out what's going on
f31225302799a017de2cd1d52c40d956c2880e9f
1126
1120
2024-05-20T03:11:12Z
Dennisc
27
wikitext
text/x-wiki
This is [[User:Ben]]'s guide to getting started with machine learning.
=== Dependencies ===
Here's some useful dependencies that I use:
* [https://astral.sh/blog/uv uv]
** This is similar to Pip but written in Rust and is way faster
** It has nice management of virtual environments
** Can use Conda instead but it is much slower
* [https://github.com/features/copilot Github Copilot]
* [https://github.com/kscalelabs/mlfab mlfab]
** This is a Python package I made to help make it easy to quickly try out machine learning ideas in PyTorch
* Coding tools
** [https://mypy-lang.org/ mypy] static analysis
** [https://github.com/psf/black black] code formatter
** [https://docs.astral.sh/ruff/ ruff] alternative to flake8
=== Installing Starter Project ===
* Go to [https://github.com/kscalelabs/getting-started this project] and install it
==== Opening the project in VSCode ====
* Create a VSCode config file that looks something like this:
<syntaxhighlight lang="json">
{
"folders": [
{
"name": "Getting Started",
"path": "/home/ubuntu/Github/getting_started"
},
{
"name": "Workspaces",
"path": "/home/ubuntu/.code-workspaces"
}
],
"settings": {
"cmake.configureSettings": {
"CMAKE_CUDA_COMPILER": "/usr/bin/nvcc",
"CMAKE_PREFIX_PATH": [
"/home/ubuntu/.virtualenvs/getting-started/lib/python3.11/site-packages/torch/share/cmake"
],
"PYTHON_EXECUTABLE": "/home/ubuntu/.virtualenvs/getting-started/bin/python",
"TORCH_CUDA_ARCH_LIST": "'8.0'"
},
"python.defaultInterpreterPath": "/home/ubuntu/.virtualenvs/getting-started/bin/python",
"ruff.path": [
"/home/ubuntu/.virtualenvs/getting-started/bin/ruff"
]
}
}
</syntaxhighlight>
* Install the [https://code.visualstudio.com/docs/remote/ssh VSCode SSH extension]
* SSH into the cluster (see [[K-Scale Cluster]] for instructions)
* Open the workspace that you created in VSCode
=== Useful Brain Dump Stuff ===
* Use <code>breakpoint()</code> to debug code
* Check out the [https://github.com/kscalelabs/mlfab/tree/master/examples mlfab examples directory] for some ideas
* It is a good idea to try to write the full training loop yourself to figure out what's going on
* Run `nvidia-smi` to see the GPUs and their statuses/any active processes
722041ea6c7159b5982c148a4deb7f54f135c579
1127
1126
2024-05-20T03:11:30Z
Dennisc
27
wikitext
text/x-wiki
This is [[User:Ben]]'s guide to getting started with machine learning.
=== Dependencies ===
Here's some useful dependencies that I use:
* [https://astral.sh/blog/uv uv]
** This is similar to Pip but written in Rust and is way faster
** It has nice management of virtual environments
** Can use Conda instead but it is much slower
* [https://github.com/features/copilot Github Copilot]
* [https://github.com/kscalelabs/mlfab mlfab]
** This is a Python package I made to help make it easy to quickly try out machine learning ideas in PyTorch
* Coding tools
** [https://mypy-lang.org/ mypy] static analysis
** [https://github.com/psf/black black] code formatter
** [https://docs.astral.sh/ruff/ ruff] alternative to flake8
=== Installing Starter Project ===
* Go to [https://github.com/kscalelabs/getting-started this project] and install it
==== Opening the project in VSCode ====
* Create a VSCode config file that looks something like this:
<syntaxhighlight lang="json">
{
"folders": [
{
"name": "Getting Started",
"path": "/home/ubuntu/Github/getting_started"
},
{
"name": "Workspaces",
"path": "/home/ubuntu/.code-workspaces"
}
],
"settings": {
"cmake.configureSettings": {
"CMAKE_CUDA_COMPILER": "/usr/bin/nvcc",
"CMAKE_PREFIX_PATH": [
"/home/ubuntu/.virtualenvs/getting-started/lib/python3.11/site-packages/torch/share/cmake"
],
"PYTHON_EXECUTABLE": "/home/ubuntu/.virtualenvs/getting-started/bin/python",
"TORCH_CUDA_ARCH_LIST": "'8.0'"
},
"python.defaultInterpreterPath": "/home/ubuntu/.virtualenvs/getting-started/bin/python",
"ruff.path": [
"/home/ubuntu/.virtualenvs/getting-started/bin/ruff"
]
}
}
</syntaxhighlight>
* Install the [https://code.visualstudio.com/docs/remote/ssh VSCode SSH extension]
* SSH into the cluster (see [[K-Scale Cluster]] for instructions)
* Open the workspace that you created in VSCode
=== Useful Brain Dump Stuff ===
* Use <code>breakpoint()</code> to debug code
* Check out the [https://github.com/kscalelabs/mlfab/tree/master/examples mlfab examples directory] for some ideas
* It is a good idea to try to write the full training loop yourself to figure out what's going on
* Run <code>nvidia-smi</code> to see the GPUs and their statuses/any active processes
48682cf497d4672309f1c78908fc1aa8b51b6c91
1128
1127
2024-05-20T03:19:45Z
Dennisc
27
skeleton of uv
wikitext
text/x-wiki
This is [[User:Ben]]'s guide to getting started with machine learning.
=== Dependencies ===
Here's some useful dependencies that I use:
* [https://astral.sh/blog/uv uv]
** This is similar to Pip but written in Rust and is way faster
** It has nice management of virtual environments
** Can use Conda instead but it is much slower
* [https://github.com/features/copilot Github Copilot]
* [https://github.com/kscalelabs/mlfab mlfab]
** This is a Python package I made to help make it easy to quickly try out machine learning ideas in PyTorch
* Coding tools
** [https://mypy-lang.org/ mypy] static analysis
** [https://github.com/psf/black black] code formatter
** [https://docs.astral.sh/ruff/ ruff] alternative to flake8
==== uv ====
To get started with <code>uv</code>, pick a directory you want your virtual environment to live in. (<code>$HOME</code> is not recommended.) Once you have <code>cd</code>ed there, run
<syntaxhighlight lang="bash">
uv venv
</syntaxhighlight>
To activate your virtual environment, run
<syntaxhighlight lang="bash">
source .venv/bin/activate
</syntaxhighlight>
*while in the directory you created your <code>.venv</code> in*.
=== Installing Starter Project ===
* Go to [https://github.com/kscalelabs/getting-started this project] and install it
==== Opening the project in VSCode ====
* Create a VSCode config file that looks something like this:
<syntaxhighlight lang="json">
{
"folders": [
{
"name": "Getting Started",
"path": "/home/ubuntu/Github/getting_started"
},
{
"name": "Workspaces",
"path": "/home/ubuntu/.code-workspaces"
}
],
"settings": {
"cmake.configureSettings": {
"CMAKE_CUDA_COMPILER": "/usr/bin/nvcc",
"CMAKE_PREFIX_PATH": [
"/home/ubuntu/.virtualenvs/getting-started/lib/python3.11/site-packages/torch/share/cmake"
],
"PYTHON_EXECUTABLE": "/home/ubuntu/.virtualenvs/getting-started/bin/python",
"TORCH_CUDA_ARCH_LIST": "'8.0'"
},
"python.defaultInterpreterPath": "/home/ubuntu/.virtualenvs/getting-started/bin/python",
"ruff.path": [
"/home/ubuntu/.virtualenvs/getting-started/bin/ruff"
]
}
}
</syntaxhighlight>
* Install the [https://code.visualstudio.com/docs/remote/ssh VSCode SSH extension]
* SSH into the cluster (see [[K-Scale Cluster]] for instructions)
* Open the workspace that you created in VSCode
=== Useful Brain Dump Stuff ===
* Use <code>breakpoint()</code> to debug code
* Check out the [https://github.com/kscalelabs/mlfab/tree/master/examples mlfab examples directory] for some ideas
* It is a good idea to try to write the full training loop yourself to figure out what's going on
* Run <code>nvidia-smi</code> to see the GPUs and their statuses/any active processes
fda062259e6fccf9c4d3ef5563c4d1018ae1cdad
1129
1128
2024-05-20T03:20:08Z
Dennisc
27
fix bold
wikitext
text/x-wiki
This is [[User:Ben]]'s guide to getting started with machine learning.
=== Dependencies ===
Here's some useful dependencies that I use:
* [https://astral.sh/blog/uv uv]
** This is similar to Pip but written in Rust and is way faster
** It has nice management of virtual environments
** Can use Conda instead but it is much slower
* [https://github.com/features/copilot Github Copilot]
* [https://github.com/kscalelabs/mlfab mlfab]
** This is a Python package I made to help make it easy to quickly try out machine learning ideas in PyTorch
* Coding tools
** [https://mypy-lang.org/ mypy] static analysis
** [https://github.com/psf/black black] code formatter
** [https://docs.astral.sh/ruff/ ruff] alternative to flake8
==== uv ====
To get started with <code>uv</code>, pick a directory you want your virtual environment to live in. (<code>$HOME</code> is not recommended.) Once you have <code>cd</code>ed there, run
<syntaxhighlight lang="bash">
uv venv
</syntaxhighlight>
To activate your virtual environment, run
<syntaxhighlight lang="bash">
source .venv/bin/activate
</syntaxhighlight>
'''while in the directory you created your <code>.venv</code> in'''.
=== Installing Starter Project ===
* Go to [https://github.com/kscalelabs/getting-started this project] and install it
==== Opening the project in VSCode ====
* Create a VSCode config file that looks something like this:
<syntaxhighlight lang="json">
{
"folders": [
{
"name": "Getting Started",
"path": "/home/ubuntu/Github/getting_started"
},
{
"name": "Workspaces",
"path": "/home/ubuntu/.code-workspaces"
}
],
"settings": {
"cmake.configureSettings": {
"CMAKE_CUDA_COMPILER": "/usr/bin/nvcc",
"CMAKE_PREFIX_PATH": [
"/home/ubuntu/.virtualenvs/getting-started/lib/python3.11/site-packages/torch/share/cmake"
],
"PYTHON_EXECUTABLE": "/home/ubuntu/.virtualenvs/getting-started/bin/python",
"TORCH_CUDA_ARCH_LIST": "'8.0'"
},
"python.defaultInterpreterPath": "/home/ubuntu/.virtualenvs/getting-started/bin/python",
"ruff.path": [
"/home/ubuntu/.virtualenvs/getting-started/bin/ruff"
]
}
}
</syntaxhighlight>
* Install the [https://code.visualstudio.com/docs/remote/ssh VSCode SSH extension]
* SSH into the cluster (see [[K-Scale Cluster]] for instructions)
* Open the workspace that you created in VSCode
=== Useful Brain Dump Stuff ===
* Use <code>breakpoint()</code> to debug code
* Check out the [https://github.com/kscalelabs/mlfab/tree/master/examples mlfab examples directory] for some ideas
* It is a good idea to try to write the full training loop yourself to figure out what's going on
* Run <code>nvidia-smi</code> to see the GPUs and their statuses/any active processes
70ded971b6eeee4d1b667011b3ec82e7df254c1a
1130
1129
2024-05-20T03:23:58Z
Dennisc
27
How to install uv on cluster
wikitext
text/x-wiki
This is [[User:Ben]]'s guide to getting started with machine learning.
=== Dependencies ===
Here's some useful dependencies that I use:
* [https://astral.sh/blog/uv uv]
** This is similar to Pip but written in Rust and is way faster
** It has nice management of virtual environments
** Can use Conda instead but it is much slower
* [https://github.com/features/copilot Github Copilot]
* [https://github.com/kscalelabs/mlfab mlfab]
** This is a Python package I made to help make it easy to quickly try out machine learning ideas in PyTorch
* Coding tools
** [https://mypy-lang.org/ mypy] static analysis
** [https://github.com/psf/black black] code formatter
** [https://docs.astral.sh/ruff/ ruff] alternative to flake8
==== uv ====
To install <code>uv</code> on the K-Scale clusters, run
<syntaxhighlight lang="bash">
curl -LsSf https://astral.sh/uv/install.sh | sh
</syntaxhighlight>
To get started with <code>uv</code>, pick a directory you want your virtual environment to live in. (<code>$HOME</code> is not recommended.) Once you have <code>cd</code>ed there, run
<syntaxhighlight lang="bash">
uv venv
</syntaxhighlight>
To activate your virtual environment, run
<syntaxhighlight lang="bash">
source .venv/bin/activate
</syntaxhighlight>
'''while in the directory you created your <code>.venv</code> in'''.
=== Installing Starter Project ===
* Go to [https://github.com/kscalelabs/getting-started this project] and install it
==== Opening the project in VSCode ====
* Create a VSCode config file that looks something like this:
<syntaxhighlight lang="json">
{
"folders": [
{
"name": "Getting Started",
"path": "/home/ubuntu/Github/getting_started"
},
{
"name": "Workspaces",
"path": "/home/ubuntu/.code-workspaces"
}
],
"settings": {
"cmake.configureSettings": {
"CMAKE_CUDA_COMPILER": "/usr/bin/nvcc",
"CMAKE_PREFIX_PATH": [
"/home/ubuntu/.virtualenvs/getting-started/lib/python3.11/site-packages/torch/share/cmake"
],
"PYTHON_EXECUTABLE": "/home/ubuntu/.virtualenvs/getting-started/bin/python",
"TORCH_CUDA_ARCH_LIST": "'8.0'"
},
"python.defaultInterpreterPath": "/home/ubuntu/.virtualenvs/getting-started/bin/python",
"ruff.path": [
"/home/ubuntu/.virtualenvs/getting-started/bin/ruff"
]
}
}
</syntaxhighlight>
* Install the [https://code.visualstudio.com/docs/remote/ssh VSCode SSH extension]
* SSH into the cluster (see [[K-Scale Cluster]] for instructions)
* Open the workspace that you created in VSCode
=== Useful Brain Dump Stuff ===
* Use <code>breakpoint()</code> to debug code
* Check out the [https://github.com/kscalelabs/mlfab/tree/master/examples mlfab examples directory] for some ideas
* It is a good idea to try to write the full training loop yourself to figure out what's going on
* Run <code>nvidia-smi</code> to see the GPUs and their statuses/any active processes
c1414a6754df6d02dda2b53e9cb9f3b28a967cda
1131
1130
2024-05-20T03:33:05Z
Dennisc
27
Explain uv venv --python 3.11 flag and why it is important/why one might want to use it.
wikitext
text/x-wiki
This is [[User:Ben]]'s guide to getting started with machine learning.
=== Dependencies ===
Here are some useful dependencies that I use:
* [https://astral.sh/blog/uv uv]
** Similar to pip, but written in Rust and much faster
** It has nice management of virtual environments
** You can use Conda instead, but it is much slower
* [https://github.com/features/copilot GitHub Copilot]
* [https://github.com/kscalelabs/mlfab mlfab]
** A Python package I made that makes it easy to quickly try out machine learning ideas in PyTorch
* Coding tools
** [https://mypy-lang.org/ mypy] for static type checking
** [https://github.com/psf/black black] for code formatting
** [https://docs.astral.sh/ruff/ ruff] as a faster alternative to flake8
==== uv ====
To install <code>uv</code> on the K-Scale clusters, run
<syntaxhighlight lang="bash">
curl -LsSf https://astral.sh/uv/install.sh | sh
</syntaxhighlight>
To get started with <code>uv</code>, pick a directory you want your virtual environment to live in. (<code>$HOME</code> is not recommended.) Once you have <code>cd</code>ed there, run
<syntaxhighlight lang="bash">
uv venv
</syntaxhighlight>
'''If you are on the clusters''', you instead may want to run
<syntaxhighlight lang="bash">
uv venv --python 3.11
</syntaxhighlight>
to ensure that the virtual environment uses Python 3.11. By default, uv uses the system's version of Python (whatever <code>which python</code> resolves to), and the clusters are running Python 3.10.12. Python 3.11 matters because various projects, including the starter project, require it.
To activate your virtual environment, run
<syntaxhighlight lang="bash">
source .venv/bin/activate
</syntaxhighlight>
'''while in the directory you created your <code>.venv</code> in'''.
=== Installing Starter Project ===
* Go to [https://github.com/kscalelabs/getting-started this project] and install it (a rough sketch of the steps is shown below)
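The exact steps are whatever the repository's README says; the sketch below is only illustrative and assumes the project is a standard pip-installable package and that the uv virtual environment from the section above is already activated:
<syntaxhighlight lang="bash">
# Illustrative only; follow the repository's README if it differs.
git clone https://github.com/kscalelabs/getting-started.git ~/Github/getting_started
cd ~/Github/getting_started
# Editable install into the active uv virtual environment (assumes a pyproject.toml/setup.py).
uv pip install -e .
</syntaxhighlight>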
==== Opening the project in VSCode ====
* Create a VSCode config file that looks something like this:
<syntaxhighlight lang="json">
{
"folders": [
{
"name": "Getting Started",
"path": "/home/ubuntu/Github/getting_started"
},
{
"name": "Workspaces",
"path": "/home/ubuntu/.code-workspaces"
}
],
"settings": {
"cmake.configureSettings": {
"CMAKE_CUDA_COMPILER": "/usr/bin/nvcc",
"CMAKE_PREFIX_PATH": [
"/home/ubuntu/.virtualenvs/getting-started/lib/python3.11/site-packages/torch/share/cmake"
],
"PYTHON_EXECUTABLE": "/home/ubuntu/.virtualenvs/getting-started/bin/python",
"TORCH_CUDA_ARCH_LIST": "'8.0'"
},
"python.defaultInterpreterPath": "/home/ubuntu/.virtualenvs/getting-started/bin/python",
"ruff.path": [
"/home/ubuntu/.virtualenvs/getting-started/bin/ruff"
]
}
}
</syntaxhighlight>
* Install the [https://code.visualstudio.com/docs/remote/ssh VSCode SSH extension]
* SSH into the cluster (see [[K-Scale Cluster]] for instructions)
* Open the workspace that you created in VSCode
=== Useful Brain Dump Stuff ===
* Use <code>breakpoint()</code> to debug code
* Check out the [https://github.com/kscalelabs/mlfab/tree/master/examples mlfab examples directory] for some ideas
* It is a good idea to try to write the full training loop yourself to figure out what's going on
* Run <code>nvidia-smi</code> to see the GPUs, their status, and any active processes (see the monitoring example below)
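For continuous monitoring while a job is running, one option is to wrap <code>nvidia-smi</code> in <code>watch</code>:
<syntaxhighlight lang="bash">
# Refresh the nvidia-smi output every second; exit with Ctrl+C.
watch -n 1 nvidia-smi
</syntaxhighlight>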
7aebade2c72b449559a1769fc75d0cd5689410ee
Main Page
0
1
1121
1079
2024-05-20T02:41:17Z
Modeless
7
/* List of Humanoid Robots */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.youtube.com/watch?v=LiNgr1tz49I&list=PLZnJoM76RM6ItAfZIxJYNKdaR_BobleLY Advanced Robot Dynamics]
| High-quality open-source course from CMU
|-
| [https://www.youtube.com/watch?v=6rUdAOCNXAU&list=PLZnJoM76RM6KugDT9sw5zhAmqKnGeoLRa Optimal Control]
| High-quality open-source course from CMU
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
| Textbook for graduate-level courses in robotics
|-
| [[Learning algorithms]]
| Resources related with training humanoid models in simulation and real environments
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Courses]]
| Category for pages about useful courses related to robotics
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Firmware]]
| Category for pages relating to firmware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Papers]]
| Category for humanoid robotics papers
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Honda Robotics]]
| [[ASIMO]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[SoftBank Robotics]]
| [[Pepper]], [[NAO]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]], [[G1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[Xpeng]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]], [[Robonaut2]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[Proxy]]
|
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[SUPCON]]
| [[Navigator α]]
|-
| [[LASER Robotics]]
| [[HECTOR V2]]
|-
| [[UC Berkeley]]
| [[Berkeley Blue]]
|-
| [[NEURA Robotics]]
| [[4NE-1]]
|-
| [[University of Tehran]]
| [[Surena IV]]
|-
| [[Haier]]
|
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== List of Actuators ===
{| class="wikitable"
|-
! Actuator
! Notes
|-
| [[OBot]]
| Open-source actuator
|-
| [[SPIN Servo]]
| Open-source actuator
|-
| [[VESCular6]]
| A project based on [[VESC]]
|-
| [[ODrive]]
| A precision motor controller
|-
| [[Solo Motor Controller]]
| A motor controller alternative to the [[ODrive]].
|-
| [[J60]]
| Actuators built for the [[DEEP Robotics]] quadrupeds.
|-
| [[K-Scale Motor Controller]]
| An open-source motor controller
|}
bcaf875e1a5c513b2b09f73db22b75ede78deca0
Haier
0
264
1122
2024-05-20T02:46:39Z
Modeless
7
Created page with "Haier is a Chinese manufacturer of appliances and consumer electronics. There's [a video of a Haier-branded humanoid here](https://www.youtube.com/watch?v=7Ve9UTUJZG8), althou..."
wikitext
text/x-wiki
Haier is a Chinese manufacturer of appliances and consumer electronics. There's [a video of a Haier-branded humanoid here](https://www.youtube.com/watch?v=7Ve9UTUJZG8), although there are no further details.
1b3a9eca71600d4d908d475d026d8fb815f5b371
1123
1122
2024-05-20T02:47:15Z
Modeless
7
wikitext
text/x-wiki
Haier is a Chinese manufacturer of appliances and consumer electronics. There's [[a video of a Haier-branded humanoid here|https://www.youtube.com/watch?v=7Ve9UTUJZG8]], although there are no further details.
7908355773ae8ec82b2d850bbd381bbaea3efe4f
1124
1123
2024-05-20T02:47:27Z
Modeless
7
wikitext
text/x-wiki
Haier is a Chinese manufacturer of appliances and consumer electronics. There's [[https://www.youtube.com/watch?v=7Ve9UTUJZG8|a video of a Haier-branded humanoid here]], although there are no further details.
f56cbdabdf3c8f734fb876a3fa6354d07b615e74
1125
1124
2024-05-20T02:47:38Z
Modeless
7
wikitext
text/x-wiki
Haier is a Chinese manufacturer of appliances and consumer electronics. There's [https://www.youtube.com/watch?v=7Ve9UTUJZG8 a video of a Haier-branded humanoid here], although there are no further details.
13101097ee30704d2ec578cba5809df75f97d22d
Setting Up MediaWiki on AWS
0
265
1132
2024-05-20T19:46:21Z
Ben
2
Created page with "This document contains a walk-through of how to set up MediaWiki on AWS. In total, this process should take about 30 minutes. === Getting Started === # Use [https://aws.amaz..."
wikitext
text/x-wiki
This document contains a walk-through of how to set up MediaWiki on AWS. In total, this process should take about 30 minutes.
=== Getting Started ===
# Use [https://aws.amazon.com/marketplace/pp/prodview-3tokjpxwvddp2 this service] to install MediaWiki to an EC2 instance
# After installing, SSH into the instance. The MediaWiki root files live in <code>/var/www/html</code>, and the Apache configuration files live in <code>/etc/apache2</code>.
=== Set up A Record ===
In your domain's DNS settings, create an A record that points your desired hostname at the IP address of the newly created EC2 instance (you can verify the record as shown below). For example:
* '''Host name''': <code>@</code> (for the main domain) or <code>wiki</code> (for a subdomain)
* '''Type''': <code>A</code>
* '''Data''': <code>127.0.0.1</code> (use the IP address of the EC2 instance)
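Once the record has propagated, you can check that the hostname resolves to the instance (the hostname below is just a placeholder; substitute your own):
<syntaxhighlight lang="bash">
# Should print the public IP address of the EC2 instance.
dig +short wiki.example.com A
</syntaxhighlight>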
=== Setting Up SSL ===
Roughly follow [https://www.digitalocean.com/community/tutorials/how-to-secure-apache-with-let-s-encrypt-on-ubuntu-20-04 these instructions] to set up an SSL certificate using Let's Encrypt
# Install certbot
<syntaxhighlight lang="bash">
sudo apt install certbot python3-certbot-apache
</syntaxhighlight>
# Run certbot (you can just select "No redirect")
<syntaxhighlight lang="bash">
sudo certbot --apache
</syntaxhighlight>
# Verify that the certbot auto-renewal timer is active
<syntaxhighlight lang="bash">
sudo systemctl status certbot.timer
</syntaxhighlight>
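Optionally, you can also do a dry run of the renewal process to confirm that automatic renewal will work:
<syntaxhighlight lang="bash">
sudo certbot renew --dry-run
</syntaxhighlight>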
=== Update Page URLs ===
To get MediaWiki to serve pages at <code>/w/Some_Page</code>-style URLs, modify <code>LocalSettings.php</code>. Replace the line that sets <code>$wgScriptPath</code> with the following:
<syntaxhighlight lang="php">
$wgScriptPath = "";
$wgArticlePath = "/w/$1";
$wgUsePathInfo = true;
</syntaxhighlight>
Next, in <code>/etc/apache2/apache2.conf</code>, add the following section:
<syntaxhighlight lang="text">
<Directory /var/www/html>
AllowOverride all
</Directory>
</syntaxhighlight>
Create a new <code>.htaccess</code> in <code>/var/www/html</code> with the following contents:
<syntaxhighlight lang="text">
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^w/(.*)$ index.php/$1 [PT,L,QSA]
</syntaxhighlight>
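If the rewrite rules have no effect, the Apache <code>rewrite</code> module may not be enabled (I have not checked whether the marketplace image enables it by default); it can be turned on with:
<syntaxhighlight lang="bash">
sudo a2enmod rewrite
</syntaxhighlight>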
Finally, reload Apache2:
<syntaxhighlight lang="bash">
sudo service apache2 reload
</syntaxhighlight>
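To confirm that the new URLs work, you can request a page over HTTPS (again, the hostname here is only a placeholder):
<syntaxhighlight lang="bash">
# A 200 OK, or a redirect to the canonical page title, means the rewrite is working.
curl -I https://wiki.example.com/w/Main_Page
</syntaxhighlight>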
=== Extras ===
Here are some notes on extra features you can add on top of MediaWiki.
==== Storing images in S3 with Cloudfront ====
You can use the [https://www.mediawiki.org/wiki/Extension:AWS AWS MediaWiki Extension] to allow users to upload images and files, store them in S3, and serve them through CloudFront.
This process is somewhat involved. I may write notes about how to do this well in the future.
d03b04998fef32b797f4648ec7854c04b05dea5b
JPL Robotics Meeting Notes
0
266
1133
2024-05-20T19:54:28Z
Vrtnis
21
Created page with "[[User:vrtnis|User:vrtnis]]' notes from meeting with JPL robotics head of robotics team (EELS and Rover)"
wikitext
text/x-wiki
[[User:vrtnis|User:vrtnis]]' notes from meeting with JPL robotics head of robotics team (EELS and Rover)
fe184d1f26022168d3e1225e568c8289f6ab8b5e
1134
1133
2024-05-20T19:55:12Z
Vrtnis
21
/*mars rover */
wikitext
text/x-wiki
[[User:vrtnis|User:vrtnis]]' notes from meeting with JPL robotics head of robotics team (EELS and Rover)
=== Mars rover autonomy ===
* combine approximate kinematic settling with a dual-cost path planner to navigate safely in unstructured environments.
* develop multi-agent capabilities to significantly enhance performance and safety of autonomous operations on Mars.
* engineer the autonomy system to efficiently handle Mars' unpredictable terrain, improving mission success rates.
* incorporate mechanisms for self-diagnosis and repair to ensure long-term functionality with minimal Earth-based support.
44f6a96d1f447ff4f9ad32e5a9fdd77581a2ff5f
1135
1134
2024-05-20T19:56:54Z
Vrtnis
21
wikitext
text/x-wiki
[[User:vrtnis|User:vrtnis]]' notes from meeting with JPL robotics head of robotics team (EELS and Rover)
=== Mars rover autonomy ===
* combine approximate kinematic settling with a dual-cost path planner to navigate safely in unstructured environments.
* develop multi-agent capabilities to significantly enhance performance and safety of autonomous operations on Mars.
* engineer the autonomy system to efficiently handle Mars' unpredictable terrain, improving mission success rates.
* incorporate mechanisms for self-diagnosis and repair to ensure long-term functionality with minimal Earth-based support.
=== risk-aware planning ===
* utilize Boole's inequality for risk allocation.
* enhancing decision-making under uncertainty.
* employ predictive analytics to anticipate and mitigate potential risks.
* allowing for dynamic re-planning and risk management.
c91c0f3351de90272bb45c5889ddd57760da98e8
1136
1135
2024-05-20T19:57:02Z
Vrtnis
21
/* risk-aware planning */
wikitext
text/x-wiki
[[User:vrtnis|User:vrtnis]]' notes from meeting with JPL robotics head of robotics team (EELS and Rover)
=== Mars rover autonomy ===
* combine approximate kinematic settling with a dual-cost path planner to navigate safely in unstructured environments.
* develop multi-agent capabilities to significantly enhance performance and safety of autonomous operations on Mars.
* engineer the autonomy system to efficiently handle Mars' unpredictable terrain, improving mission success rates.
* incorporate mechanisms for self-diagnosis and repair to ensure long-term functionality with minimal Earth-based support.
=== Risk-aware planning ===
* utilize Boole's inequality for risk allocation.
* enhancing decision-making under uncertainty.
* employ predictive analytics to anticipate and mitigate potential risks.
* allowing for dynamic re-planning and risk management.
cdcba16b8c3150efaabcf4c0db35fc3273d155c1
1137
1136
2024-05-20T19:57:07Z
Vrtnis
21
wikitext
text/x-wiki
[[User:vrtnis|User:vrtnis]]' notes from meeting with JPL robotics head of robotics team (EELS and Rover)
=== Mars rover autonomy ===
* combine approximate kinematic settling with a dual-cost path planner to navigate safely in unstructured environments.
* develop multi-agent capabilities to significantly enhance performance and safety of autonomous operations on Mars.
* engineer the autonomy system to efficiently handle Mars' unpredictable terrain, improving mission success rates.
* incorporate mechanisms for self-diagnosis and repair to ensure long-term functionality with minimal Earth-based support.
=== Risk-aware planning ===
* utilize Boole's inequality for risk allocation.
* enhancing decision-making under uncertainty.
* employ predictive analytics to anticipate and mitigate potential risks.
* allowing for dynamic re-planning and risk management.
4134a20b0eccf641b326fa0f2ab65f03a2341c7e
1139
1137
2024-05-20T20:00:51Z
Vrtnis
21
/*add slide*/
wikitext
text/x-wiki
[[User:vrtnis|User:vrtnis]]' notes from meeting with JPL robotics head of robotics team (EELS and Rover)
=== Mars rover autonomy ===
* combine approximate kinematic settling with a dual-cost path planner to navigate safely in unstructured environments.
* develop multi-agent capabilities to significantly enhance performance and safety of autonomous operations on Mars.
* engineer the autonomy system to efficiently handle Mars' unpredictable terrain, improving mission success rates.
* incorporate mechanisms for self-diagnosis and repair to ensure long-term functionality with minimal Earth-based support.
[[File:Screenshot12-58-42.png|800px|thumb|none|Summary]]
=== Risk-aware planning ===
* utilize Boole's inequality for risk allocation.
* enhancing decision-making under uncertainty.
* employ predictive analytics to anticipate and mitigate potential risks.
* allowing for dynamic re-planning and risk management.
fcc5677dfc8747e7c8d1018291086411a0a1d7c2
1140
1139
2024-05-20T20:02:24Z
Vrtnis
21
/*sensor integration*/
wikitext
text/x-wiki
[[User:vrtnis|User:vrtnis]]' notes from meeting with JPL robotics head of robotics team (EELS and Rover)
=== Mars rover autonomy ===
* combine approximate kinematic settling with a dual-cost path planner to navigate safely in unstructured environments.
* develop multi-agent capabilities to significantly enhance performance and safety of autonomous operations on Mars.
* engineer the autonomy system to efficiently handle Mars' unpredictable terrain, improving mission success rates.
* incorporate mechanisms for self-diagnosis and repair to ensure long-term functionality with minimal Earth-based support.
[[File:Screenshot12-58-42.png|800px|thumb|none|Summary]]
=== Risk-aware planning ===
* utilize Boole's inequality for risk allocation.
* enhancing decision-making under uncertainty.
* employ predictive analytics to anticipate and mitigate potential risks.
* allowing for dynamic re-planning and risk management.
=== Advanced sensor integration ===
* high-resolution cameras and spectrometers.
* integrate sensors for comprehensive environmental data.
* using ground-penetrating radar (GPR).
18d4fa33c088111e0adefde2dbfb7ebc83ae32b5
1141
1140
2024-05-20T20:02:33Z
Vrtnis
21
wikitext
text/x-wiki
[[User:vrtnis|User:vrtnis]]' notes from meeting with JPL robotics head of robotics team (EELS and Rover)
=== Mars rover autonomy ===
* combine approximate kinematic settling with a dual-cost path planner to navigate safely in unstructured environments.
* develop multi-agent capabilities to significantly enhance performance and safety of autonomous operations on Mars.
* engineer the autonomy system to efficiently handle Mars' unpredictable terrain, improving mission success rates.
* incorporate mechanisms for self-diagnosis and repair to ensure long-term functionality with minimal Earth-based support.
[[File:Screenshot12-58-42.png|800px|thumb|none|Summary]]
=== Risk-aware planning ===
* utilize Boole's inequality for risk allocation.
* enhancing decision-making under uncertainty.
* employ predictive analytics to anticipate and mitigate potential risks.
* allowing for dynamic re-planning and risk management.
=== Advanced sensor integration ===
* high-resolution cameras and spectrometers.
* integrate sensors for comprehensive environmental data.
* using ground-penetrating radar (GPR).
ddbe914e20dfde5db892802ae4254910fdcc22f0
1142
1141
2024-05-20T20:14:55Z
Vrtnis
21
/*Energy management and path planning*/
wikitext
text/x-wiki
[[User:vrtnis|User:vrtnis]]' notes from meeting with JPL robotics head of robotics team (EELS and Rover)
=== Mars rover autonomy ===
* combine approximate kinematic settling with a dual-cost path planner to navigate safely in unstructured environments.
* develop multi-agent capabilities to significantly enhance performance and safety of autonomous operations on Mars.
* engineer the autonomy system to efficiently handle Mars' unpredictable terrain, improving mission success rates.
* incorporate mechanisms for self-diagnosis and repair to ensure long-term functionality with minimal Earth-based support.
[[File:Screenshot12-58-42.png|800px|thumb|none|Summary]]
=== Risk-aware planning ===
* utilize Boole's inequality for risk allocation.
* enhancing decision-making under uncertainty.
* employ predictive analytics to anticipate and mitigate potential risks.
* allowing for dynamic re-planning and risk management.
=== Advanced sensor integration ===
* high-resolution cameras and spectrometers.
* integrate sensors for comprehensive environmental data.
* using ground-penetrating radar (GPR).
=== Energy ===
* optimize energy consumption.
* utilize a combination of solar panels.
* gather power from the environment, such as thermal gradients and mechanical movements.
=== Navigation and mapping ===
* utilize LIDAR
* optimal path planning,.
* integrate real-time obstacle detection
6416903142cc40bb05633038143b837e8330b3fb
1143
1142
2024-05-20T20:15:05Z
Vrtnis
21
wikitext
text/x-wiki
[[User:vrtnis|User:vrtnis]]' notes from meeting with JPL robotics head of robotics team (EELS and Rover)
=== Mars rover autonomy ===
* combine approximate kinematic settling with a dual-cost path planner to navigate safely in unstructured environments.
* develop multi-agent capabilities to significantly enhance performance and safety of autonomous operations on Mars.
* engineer the autonomy system to efficiently handle Mars' unpredictable terrain, improving mission success rates.
* incorporate mechanisms for self-diagnosis and repair to ensure long-term functionality with minimal Earth-based support.
[[File:Screenshot12-58-42.png|800px|thumb|none|Summary]]
=== Risk-aware planning ===
* utilize Boole's inequality for risk allocation.
* enhancing decision-making under uncertainty.
* employ predictive analytics to anticipate and mitigate potential risks.
* allowing for dynamic re-planning and risk management.
=== Advanced sensor integration ===
* high-resolution cameras and spectrometers.
* integrate sensors for comprehensive environmental data.
* using ground-penetrating radar (GPR).
=== Energy ===
* optimize energy consumption.
* utilize a combination of solar panels.
* gather power from the environment, such as thermal gradients and mechanical movements.
=== Navigation and mapping ===
* utilize LIDAR
* optimal path planning,.
* integrate real-time obstacle detection
6df279475a4a24a8d342ac425f97b1dc4f4eafb5
1145
1143
2024-05-20T20:17:39Z
Vrtnis
21
wikitext
text/x-wiki
[[User:vrtnis|User:vrtnis]]' notes from meeting with JPL robotics head of robotics team (EELS and Rover)
=== Mars rover autonomy ===
* combine approximate kinematic settling with a dual-cost path planner to navigate safely in unstructured environments.
* develop multi-agent capabilities to significantly enhance performance and safety of autonomous operations on Mars.
* engineer the autonomy system to efficiently handle Mars' unpredictable terrain, improving mission success rates.
* incorporate mechanisms for self-diagnosis and repair to ensure long-term functionality with minimal Earth-based support.
[[File:Screenshot12-58-42.png|800px|thumb|none|Summary]]
=== Risk-aware planning ===
* utilize Boole's inequality for risk allocation.
* enhancing decision-making under uncertainty.
* employ predictive analytics to anticipate and mitigate potential risks.
* allowing for dynamic re-planning and risk management.
[[File:Screenshot13-16-39.png|400px|thumb|none]]
=== Advanced sensor integration ===
* high-resolution cameras and spectrometers.
* integrate sensors for comprehensive environmental data.
* using ground-penetrating radar (GPR).
=== Energy ===
* optimize energy consumption.
* utilize a combination of solar panels.
* gather power from the environment, such as thermal gradients and mechanical movements.
=== Navigation and mapping ===
* utilize LIDAR
* optimal path planning,.
* integrate real-time obstacle detection
db31e3e81a07dad370ca441349575bb9f2fff97d
1146
1145
2024-05-20T20:28:58Z
Vrtnis
21
/*project management lessons*/
wikitext
text/x-wiki
[[User:vrtnis|User:vrtnis]]' notes from meeting with JPL robotics head of robotics team (EELS and Rover)
=== Mars rover autonomy ===
* combine approximate kinematic settling with a dual-cost path planner to navigate safely in unstructured environments.
* develop multi-agent capabilities to significantly enhance performance and safety of autonomous operations on Mars.
* engineer the autonomy system to efficiently handle Mars' unpredictable terrain, improving mission success rates.
* incorporate mechanisms for self-diagnosis and repair to ensure long-term functionality with minimal Earth-based support.
[[File:Screenshot12-58-42.png|800px|thumb|none|Summary]]
=== Risk-aware planning ===
* utilize Boole's inequality for risk allocation.
* enhancing decision-making under uncertainty.
* employ predictive analytics to anticipate and mitigate potential risks.
* allowing for dynamic re-planning and risk management.
[[File:Screenshot13-16-39.png|400px|thumb|none]]
=== Advanced sensor integration ===
* high-resolution cameras and spectrometers.
* integrate sensors for comprehensive environmental data.
* using ground-penetrating radar (GPR).
=== Energy ===
* optimize energy consumption.
* utilize a combination of solar panels.
* gather power from the environment, such as thermal gradients and mechanical movements.
=== Navigation and mapping ===
* utilize LIDAR
* optimal path planning,.
* integrate real-time obstacle detection
=== Project managementlessons learned ===
* manage complex robotics projects like eels and Mars rovers by coordinating multiple teams
* advantage of having all team members colocated in the same building at the project's inception.
* more integrated workflows through initial colocation
* ensure alignment on project goals and timelines.
* early stages of complex projects benefit greatly from in-person collaboration. establish a strong foundation through colocation to enhance subsequent remote coordination efforts
35b7b6bc836fbed3ab7060745e0d4aa10bc587e0
1147
1146
2024-05-20T20:29:47Z
Vrtnis
21
/* Mars rover autonomy */
wikitext
text/x-wiki
[[User:vrtnis|User:vrtnis]]' notes from meeting with JPL robotics head of robotics team (EELS and Rover)
=== Mars rover autonomy ===
* combine approximate kinematic settling with a dual-cost path planner.
* develop multi-agent capabilities to significantly enhance performance and safety of autonomous operations on Mars.
* autonomy system to efficiently handle Mars' unpredictable terrain, improving mission success rates.
* incorporate mechanisms for self-diagnosis and repair to ensure long-term functionality with minimal Earth-based support.
[[File:Screenshot12-58-42.png|800px|thumb|none|Summary]]
=== Risk-aware planning ===
* utilize Boole's inequality for risk allocation.
* enhancing decision-making under uncertainty.
* employ predictive analytics to anticipate and mitigate potential risks.
* allowing for dynamic re-planning and risk management.
[[File:Screenshot13-16-39.png|400px|thumb|none]]
=== Advanced sensor integration ===
* high-resolution cameras and spectrometers.
* integrate sensors for comprehensive environmental data.
* using ground-penetrating radar (GPR).
=== Energy ===
* optimize energy consumption.
* utilize a combination of solar panels.
* gather power from the environment, such as thermal gradients and mechanical movements.
=== Navigation and mapping ===
* utilize LIDAR
* optimal path planning,.
* integrate real-time obstacle detection
=== Project managementlessons learned ===
* manage complex robotics projects like eels and Mars rovers by coordinating multiple teams
* advantage of having all team members colocated in the same building at the project's inception.
* more integrated workflows through initial colocation
* ensure alignment on project goals and timelines.
* early stages of complex projects benefit greatly from in-person collaboration. establish a strong foundation through colocation to enhance subsequent remote coordination efforts
980473f281ea1e03d1990fd9361cbf49970657fe
1148
1147
2024-05-20T20:30:06Z
Vrtnis
21
/* Advanced sensor integration */
wikitext
text/x-wiki
[[User:vrtnis|User:vrtnis]]' notes from meeting with JPL robotics head of robotics team (EELS and Rover)
=== Mars rover autonomy ===
* combine approximate kinematic settling with a dual-cost path planner.
* develop multi-agent capabilities to significantly enhance performance and safety of autonomous operations on Mars.
* autonomy system to efficiently handle Mars' unpredictable terrain, improving mission success rates.
* incorporate mechanisms for self-diagnosis and repair to ensure long-term functionality with minimal Earth-based support.
[[File:Screenshot12-58-42.png|800px|thumb|none|Summary]]
=== Risk-aware planning ===
* utilize Boole's inequality for risk allocation.
* enhancing decision-making under uncertainty.
* employ predictive analytics to anticipate and mitigate potential risks.
* allowing for dynamic re-planning and risk management.
[[File:Screenshot13-16-39.png|400px|thumb|none]]
=== Advanced sensor integration ===
* high-resolution cameras and spectrometers
* integrate sensors for comprehensive environmental data.
* using ground-penetrating radar (GPR).
=== Energy ===
* optimize energy consumption.
* utilize a combination of solar panels.
* gather power from the environment, such as thermal gradients and mechanical movements.
=== Navigation and mapping ===
* utilize LIDAR
* optimal path planning,.
* integrate real-time obstacle detection
=== Project managementlessons learned ===
* manage complex robotics projects like eels and Mars rovers by coordinating multiple teams
* advantage of having all team members colocated in the same building at the project's inception.
* more integrated workflows through initial colocation
* ensure alignment on project goals and timelines.
* early stages of complex projects benefit greatly from in-person collaboration. establish a strong foundation through colocation to enhance subsequent remote coordination efforts
261c544790f9d7c655e8d5b1634f58f4b5d3182e
1149
1148
2024-05-20T22:16:47Z
Vrtnis
21
/* Navigation and mapping */
wikitext
text/x-wiki
[[User:vrtnis|User:vrtnis]]' notes from meeting with JPL robotics head of robotics team (EELS and Rover)
=== Mars rover autonomy ===
* combine approximate kinematic settling with a dual-cost path planner.
* develop multi-agent capabilities to significantly enhance performance and safety of autonomous operations on Mars.
* autonomy system to efficiently handle Mars' unpredictable terrain, improving mission success rates.
* incorporate mechanisms for self-diagnosis and repair to ensure long-term functionality with minimal Earth-based support.
[[File:Screenshot12-58-42.png|800px|thumb|none|Summary]]
=== Risk-aware planning ===
* utilize Boole's inequality for risk allocation.
* enhancing decision-making under uncertainty.
* employ predictive analytics to anticipate and mitigate potential risks.
* allowing for dynamic re-planning and risk management.
[[File:Screenshot13-16-39.png|400px|thumb|none]]
=== Advanced sensor integration ===
* high-resolution cameras and spectrometers
* integrate sensors for comprehensive environmental data.
* using ground-penetrating radar (GPR).
=== Energy ===
* optimize energy consumption.
* utilize a combination of solar panels.
* gather power from the environment, such as thermal gradients and mechanical movements.
=== Navigation and mapping ===
* utilize LIDAR
* optimal path planning
* integrate real-time obstacle detection
=== Project managementlessons learned ===
* manage complex robotics projects like eels and Mars rovers by coordinating multiple teams
* advantage of having all team members colocated in the same building at the project's inception.
* more integrated workflows through initial colocation
* ensure alignment on project goals and timelines.
* early stages of complex projects benefit greatly from in-person collaboration. establish a strong foundation through colocation to enhance subsequent remote coordination efforts
14cc13a42efeb0e96cc04f527c3953f911befd4e
1150
1149
2024-05-20T22:19:40Z
Vrtnis
21
/* Project managementlessons learned */
wikitext
text/x-wiki
[[User:vrtnis|User:vrtnis]]' notes from meeting with JPL robotics head of robotics team (EELS and Rover)
=== Mars rover autonomy ===
* combine approximate kinematic settling with a dual-cost path planner.
* develop multi-agent capabilities to significantly enhance performance and safety of autonomous operations on Mars.
* autonomy system to efficiently handle Mars' unpredictable terrain, improving mission success rates.
* incorporate mechanisms for self-diagnosis and repair to ensure long-term functionality with minimal Earth-based support.
[[File:Screenshot12-58-42.png|800px|thumb|none|Summary]]
=== Risk-aware planning ===
* utilize Boole's inequality for risk allocation.
* enhancing decision-making under uncertainty.
* employ predictive analytics to anticipate and mitigate potential risks.
* allowing for dynamic re-planning and risk management.
[[File:Screenshot13-16-39.png|400px|thumb|none]]
=== Advanced sensor integration ===
* high-resolution cameras and spectrometers
* integrate sensors for comprehensive environmental data.
* using ground-penetrating radar (GPR).
=== Energy ===
* optimize energy consumption.
* utilize a combination of solar panels.
* gather power from the environment, such as thermal gradients and mechanical movements.
=== Navigation and mapping ===
* utilize LIDAR
* optimal path planning
* integrate real-time obstacle detection
=== Project management lessons learned ===
* manage complex robotics projects like eels and Mars rovers by coordinating multiple teams
* advantage of having all team members colocated in the same building at the project's inception.
* more integrated workflows through initial colocation
* ensure alignment on project goals and timelines.
* early stages of complex projects benefit greatly from in-person collaboration. establish a strong foundation through colocation to enhance subsequent remote coordination efforts
a816bd3bc6d369127f4dde4a31c8502e717fd713
1151
1150
2024-05-20T22:25:04Z
Vrtnis
21
Vrtnis moved page [[JPL Robotics Lessons Learnt]] to [[JPL Robotics Meeting Notes]]: Correct title
wikitext
text/x-wiki
[[User:vrtnis|User:vrtnis]]' notes from meeting with JPL robotics head of robotics team (EELS and Rover)
=== Mars rover autonomy ===
* combine approximate kinematic settling with a dual-cost path planner.
* develop multi-agent capabilities to significantly enhance performance and safety of autonomous operations on Mars.
* autonomy system to efficiently handle Mars' unpredictable terrain, improving mission success rates.
* incorporate mechanisms for self-diagnosis and repair to ensure long-term functionality with minimal Earth-based support.
[[File:Screenshot12-58-42.png|800px|thumb|none|Summary]]
=== Risk-aware planning ===
* utilize Boole's inequality for risk allocation.
* enhancing decision-making under uncertainty.
* employ predictive analytics to anticipate and mitigate potential risks.
* allowing for dynamic re-planning and risk management.
[[File:Screenshot13-16-39.png|400px|thumb|none]]
=== Advanced sensor integration ===
* high-resolution cameras and spectrometers
* integrate sensors for comprehensive environmental data.
* using ground-penetrating radar (GPR).
=== Energy ===
* optimize energy consumption.
* utilize a combination of solar panels.
* gather power from the environment, such as thermal gradients and mechanical movements.
=== Navigation and mapping ===
* utilize LIDAR
* optimal path planning
* integrate real-time obstacle detection
=== Project management lessons learned ===
* manage complex robotics projects like eels and Mars rovers by coordinating multiple teams
* advantage of having all team members colocated in the same building at the project's inception.
* more integrated workflows through initial colocation
* ensure alignment on project goals and timelines.
* early stages of complex projects benefit greatly from in-person collaboration. establish a strong foundation through colocation to enhance subsequent remote coordination efforts
a816bd3bc6d369127f4dde4a31c8502e717fd713
1153
1151
2024-05-20T22:48:32Z
Vrtnis
21
wikitext
text/x-wiki
[[User:vrtnis|User:vrtnis]]' notes from meeting with JPL robotics head of robotics team (EELS and Rover)
=== Mars rover autonomy ===
* combine approximate kinematic settling with a dual-cost path planner.
* develop multi-agent capabilities to significantly enhance performance and safety of autonomous operations on Mars.
* autonomy system to efficiently handle Mars' unpredictable terrain, improving mission success rates.
* incorporate mechanisms for self-diagnosis and repair to ensure long-term functionality with minimal Earth-based support.
[[File:Screenshot12-58-42.png|800px|thumb|none|Summary]]
=== Risk-aware planning ===
* utilize Boole's inequality for risk allocation.
* enhancing decision-making under uncertainty.
* employ predictive analytics to anticipate and mitigate potential risks.
* allowing for dynamic re-planning and risk management.
[[File:Screenshot13-16-39.png|400px|thumb|none]]
=== Advanced sensor integration ===
* high-resolution cameras and spectrometers
* integrate sensors for comprehensive environmental data.
* using ground-penetrating radar (GPR).
=== Humanoids ===
* for space exploration, utilizing human-like structure
* tasks requiring precise human-like movements.
* deploy humanoids for maintenance, repair enhancing astronaut safety.
* use in sample gathering/ environmental monitoring.
* simulate operations in space-like environments.
=== Energy ===
* optimize energy consumption.
* utilize a combination of solar panels.
* gather power from the environment, such as thermal gradients and mechanical movements.
=== Navigation and mapping ===
* utilize LIDAR
* optimal path planning
* integrate real-time obstacle detection
=== Project management lessons learned ===
* manage complex robotics projects like eels and Mars rovers by coordinating multiple teams
* advantage of having all team members colocated in the same building at the project's inception.
* more integrated workflows through initial colocation
* ensure alignment on project goals and timelines.
* early stages of complex projects benefit greatly from in-person collaboration. establish a strong foundation through colocation to enhance subsequent remote coordination efforts
5b904356b71b96e1297afb2bbcbd6d4a2f94b89a
1154
1153
2024-05-20T22:48:49Z
Vrtnis
21
wikitext
text/x-wiki
[[User:vrtnis|User:vrtnis]]' notes from meeting with JPL robotics head of robotics team (EELS and Rover)
=== Mars rover autonomy ===
* combine approximate kinematic settling with a dual-cost path planner.
* develop multi-agent capabilities to significantly enhance performance and safety of autonomous operations on Mars.
* autonomy system to efficiently handle Mars' unpredictable terrain, improving mission success rates.
* incorporate mechanisms for self-diagnosis and repair to ensure long-term functionality with minimal Earth-based support.
[[File:Screenshot12-58-42.png|800px|thumb|none|Summary]]
=== Risk-aware planning ===
* utilize Boole's inequality for risk allocation.
* enhancing decision-making under uncertainty.
* employ predictive analytics to anticipate and mitigate potential risks.
* allowing for dynamic re-planning and risk management.
[[File:Screenshot13-16-39.png|400px|thumb|none]]
=== Advanced sensor integration ===
* high-resolution cameras and spectrometers
* integrate sensors for comprehensive environmental data.
* using ground-penetrating radar (GPR).
=== Humanoids ===
* for space exploration, utilizing human-like structure
* tasks requiring precise human-like movements.
* deploy humanoids for maintenance, repair enhancing astronaut safety.
* use in sample gathering/ environmental monitoring.
* simulate operations in space-like environments.
=== Energy ===
* optimize energy consumption.
* utilize a combination of solar panels.
* gather power from the environment, such as thermal gradients and mechanical movements.
=== Navigation and mapping ===
* utilize LIDAR
* optimal path planning
* integrate real-time obstacle detection
=== Project management lessons learned ===
* manage complex robotics projects like eels and Mars rovers by coordinating multiple teams
* advantage of having all team members colocated in the same building at the project's inception.
* more integrated workflows through initial colocation
* ensure alignment on project goals and timelines.
* early stages of complex projects benefit greatly from in-person collaboration. establish a strong foundation through colocation to enhance subsequent remote coordination efforts
cfb48c0d66d26caeea9b7e850e829761c1cc1d2d
1155
1154
2024-05-20T22:50:33Z
Vrtnis
21
wikitext
text/x-wiki
[[User:vrtnis|User:vrtnis]]' notes from a meeting with the head of the JPL robotics team (EELS and Rover)
=== Mars rover autonomy ===
* combine approximate kinematic settling with a dual-cost path planner.
* develop multi-agent capabilities to significantly enhance the performance and safety of autonomous operations on Mars.
* engineer the autonomy system to efficiently handle Mars' unpredictable terrain, improving mission success rates.
* incorporate mechanisms for self-diagnosis and repair to ensure long-term functionality with minimal Earth-based support.
[[File:Screenshot12-58-42.png|800px|thumb|none|Summary]]
=== Risk-aware planning ===
* utilize Boole's inequality for risk allocation, enhancing decision-making under uncertainty.
* employ predictive analytics to anticipate and mitigate potential risks, allowing for dynamic re-planning and risk management.
[[File:Screenshot13-16-39.png|400px|thumb|none]]
=== Advanced sensor integration ===
* high-resolution cameras and spectrometers
* integrate sensors for comprehensive environmental data.
* use ground-penetrating radar (GPR).
=== Humanoids ===
* for space exploration, utilizing a human-like structure
* tasks requiring precise human-like movements.
* deploy humanoids for maintenance and repair, enhancing astronaut safety.
* use in sample gathering and environmental monitoring.
* simulate operations in space-like environments.
=== Energy ===
* optimize energy consumption.
* utilize a combination of solar panels.
* gather power from the environment, such as thermal gradients and mechanical movements.
=== Navigation and mapping ===
* utilize LIDAR
* optimal path planning
* integrate real-time obstacle detection
=== Project management lessons learned ===
* manage complex robotics projects like EELS and the Mars rovers by coordinating multiple teams
* advantage of having all team members colocated in the same building at the project's inception.
* more integrated workflows through initial colocation
* ensure alignment on project goals and timelines.
* early stages of complex projects benefit greatly from in-person collaboration; establish a strong foundation through colocation to enhance subsequent remote coordination efforts
e38a80c98642676587fae063a2f22deb53ac33ff
File:Screenshot12-58-42.png
6
267
1138
2024-05-20T19:59:49Z
Vrtnis
21
wikitext
text/x-wiki
/*summary slide*/
993a32dde4daa7fad481a7b8654815a5ccf40796
File:Screenshot13-16-39.png
6
268
1144
2024-05-20T20:17:18Z
Vrtnis
21
wikitext
text/x-wiki
path planning slide
c8c623e5024ac420ca9311b91cda1d348e931b9e
JPL Robotics Lessons Learnt
0
269
1152
2024-05-20T22:25:04Z
Vrtnis
21
Vrtnis moved page [[JPL Robotics Lessons Learnt]] to [[JPL Robotics Meeting Notes]]: Correct title
wikitext
text/x-wiki
#REDIRECT [[JPL Robotics Meeting Notes]]
cd85744a4ee3eb5a360188d3ce1f77c9abae7e06
Allen's Reinforcement Learning Notes
0
270
1156
2024-05-21T03:10:17Z
Ben
2
Created page with "Allen's reinforcement learning notes === Links === * [https://www.youtube.com/watch?v=SupFHGbytvA&list=PL_iWQOsE6TfVYGEGiAOMaOzzv41Jfm_Ps Sergey Levine RL Lecture]"
wikitext
text/x-wiki
Allen's reinforcement learning notes
=== Links ===
* [https://www.youtube.com/watch?v=SupFHGbytvA&list=PL_iWQOsE6TfVYGEGiAOMaOzzv41Jfm_Ps Sergey Levine RL Lecture]
1723d954a8b80f71a0963544656871c99747c3d7
1157
1156
2024-05-21T03:10:32Z
Ben
2
wikitext
text/x-wiki
Allen's reinforcement learning notes
=== Links ===
* [https://www.youtube.com/watch?v=SupFHGbytvA&list=PL_iWQOsE6TfVYGEGiAOMaOzzv41Jfm_Ps Sergey Levine RL Lecture]
[[Category:Reinforcement Learning]]
d248ef20442690b683cc0f041482e78b8190032b
Reinforcement Learning
0
34
1158
1056
2024-05-21T03:11:00Z
Ben
2
wikitext
text/x-wiki
This guide is incomplete and a work in progress; you can help by expanding it!
== Reinforcement Learning (RL) ==
Reinforcement Learning (RL) is a machine learning approach where an agent learns to perform tasks by interacting with an environment. It involves the agent receiving rewards or penalties based on its actions and using this feedback to improve its performance over time. RL is particularly useful in robotics for training robots to perform complex tasks autonomously. Here's how RL is applied in robotics, using simulation environments like Isaac Sim and MuJoCo:
== RL in Robotics ==
=== Practical Applications of RL ===
==== Task Automation ====
* Robots can be trained to perform repetitive or dangerous tasks autonomously, such as assembly line work, welding, or hazardous material handling.
* RL enables robots to adapt to new tasks without extensive reprogramming, making them versatile for various industrial applications.
==== Navigation and Manipulation ====
* RL is used to train robots for navigating complex environments and manipulating objects with precision, which is crucial for tasks like warehouse logistics, domestic chores, and medical surgeries.
=== Simulation Environments ===
==== Isaac Sim ====
* Isaac Sim provides a highly realistic and interactive environment where robots can be trained safely and efficiently.
* The simulated environment includes physics, sensors, and other elements that mimic real-world conditions, enabling the transfer of learned behaviors to physical robots.
==== MuJoCo ====
* MuJoCo (Multi-Joint dynamics with Contact) is a physics engine designed for research and development in robotics, machine learning, and biomechanics.
* It offers fast and accurate simulations, which are essential for training RL agents in tasks involving complex dynamics and contact-rich interactions.
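As a rough illustration of what driving a MuJoCo simulation from Python looks like (assuming the official <code>mujoco</code> bindings are installed; this is a bare simulation step, not an RL training setup):
<syntaxhighlight lang="python">
import mujoco

# Minimal model: a single free-falling box (MJCF kept deliberately tiny).
XML = """
<mujoco>
  <worldbody>
    <body>
      <freejoint/>
      <geom type="box" size="0.1 0.1 0.1"/>
    </body>
  </worldbody>
</mujoco>
"""

model = mujoco.MjModel.from_xml_string(XML)
data = mujoco.MjData(model)
for _ in range(100):
    mujoco.mj_step(model, data)   # advance the physics by one timestep
print(data.qpos)                  # pose of the free joint after 100 steps
</syntaxhighlight>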
== Training algorithms ==
* [https://en.wikipedia.org/wiki/Advantage_Actor_Critic A2C]
* [https://en.wikipedia.org/wiki/Proximal_policy_optimization PPO]
* [https://spinningup.openai.com/en/latest/algorithms/sac.html SAC]
== Resources ==
* [https://mandi-zhao.gitbook.io/deeprl-notes Mandy Zhao's Reinforcement Learning Notes]
* [https://cs224r.stanford.edu/slides/cs224r-actor-critic-split.pdf Stanford CS224R Actor Critic Slides]
[[Category: Software]]
[[Category: Reinforcement Learning]]
e4c57d8dc0b824b4ad78aa7b0939188a09f57d8c
Allen's Reinforcement Learning Notes
0
270
1159
1157
2024-05-21T03:11:49Z
Ben
2
wikitext
text/x-wiki
Allen's reinforcement learning notes
=== Links ===
* [https://rail.eecs.berkeley.edu/deeprlcourse-fa19/ Berkeley CS285]
* [https://www.youtube.com/watch?v=SupFHGbytvA&list=PL_iWQOsE6TfVYGEGiAOMaOzzv41Jfm_Ps Sergey Levine RL Lecture]
[[Category:Reinforcement Learning]]
5abb12a2472d8018339336e360522562fc482859
1161
1159
2024-05-21T05:43:47Z
Allen12
15
wikitext
text/x-wiki
Allen's reinforcement learning notes
=== Links ===
* [https://rail.eecs.berkeley.edu/deeprlcourse-fa19/ Berkeley CS285]
* [https://www.youtube.com/watch?v=SupFHGbytvA&list=PL_iWQOsE6TfVYGEGiAOMaOzzv41Jfm_Ps Sergey Levine RL Lecture]
[[Category:Reinforcement Learning]]
=== Motivation ===
Consider a problem where we have to train a robot to pick up some object. A traditional ML algorithm might try to learn some function f(x) = y, where given some position x observed via the camera we output some behavior y. The trouble is that in the real world, the correct grab location is some function of the object and the physical environment, which is hard to intuitively ascertain by observation.
The motivation behind reinforcement learning is to repeatedly take observations, then sample the effects of actions on those observations (reward and new observation/state). Ultimately, we hope to create a policy pi that maps states or observations to actions.
=== Learning ===
Learning involves the agent taking actions and the environment returning a new state and reward.
* Input: <math>s_t</math>: States at each time step
* Output: <math>a_t</math>: Actions at each time step
* Data: <math>(s_1, a_1, r_1, ... , s_T, a_T, r_T)</math>
* Learn <math>\pi_\theta : s_t -> a_t </math> to maximize <math> \sum_t r_t </math>
=== State vs. Observation ===
A state is a complete representation of the physical world while the observation is some subset or representation of s. They are not necessarily the same in that we can't always infer s_t from o_t, but o_t is inferable from s_t. To think of it as a Bayes net, we have
* s_1 -> o_1 - (pi_theta) -> a_1 (policy)
* s_1, a_1 - (p(s_{t+1} | s_t, a_t)) -> s_2 (dynamics)
Note that theta represents the parameters of the policy (for example, the parameters of a neural network). Assumption: Markov Property - Future states are independent of past states given present states. This is the fundamental difference between states and observations.
=== Problem Representation ===
States and actions are typically continuous - thus, we often want to model our output policy as a density function, which gives a probability distribution over actions at a given state.
The reward is a function of the state and action, r(s, a), which tells us which states and actions are better. When choosing hyperparameters we need to be careful to make sure that we optimize for completing long-term goals instead of always looking for immediate reward.
== Markov Chain & Decision Process==
Markov Chain: <math> M = \{S, T\} </math>, where S - state space, T - transition operator. The state space is the set of all states, and can be discrete or continuous. The transition probabilities are represented in a matrix, where the (i, j)'th entry is the probability of transitioning to state i from state j, and we can express the state distribution at the next time step by multiplying the current distribution by the transition operator.
Markov Decision Process: <math> M = \{S, A, T, r\} </math>, where A - action space. T is now a tensor, indexed by the current state, current action, and next state. We let T_{i, j, k} = p(s_{t+1} = i | s_t = j, a_t = k). r is the reward function.
=== Reinforcement Learning Algorithms - High-level ===
# Generate Samples (run policy)
# Fit a model/estimate something about how well policy is performing
# Improve policy
# Repeat
=== Temporal Difference Learning ===
Temporal Difference (TD) is a model for estimating the utility of states given some state-action-outcome information. Suppose we have some initial value <math>V_0(s) </math>, and we get some information <math> (s, a, s', r(s, a)) </math>. We can then use the update equation <math>V_{t+1}(s) = (1- \alpha)V_{t}(s)+\alpha(r(s, a) + \gamma V_t(s')) </math>. Here <math>\alpha</math> represents the learning rate, which is how much new information is weighted relative to old information, while <math>\gamma</math> represents the discount factor, which can be thought of as how much getting a reward in the future factors into our current reward.
=== Q Learning ===
Q Learning gives us a way to extract the optimal policy after learning. --
feb3c2c25a2c337385632f5c58f13996cfa33a84
1162
1161
2024-05-21T05:44:08Z
Allen12
15
/* Markov Chain & Decision Process */
wikitext
text/x-wiki
Allen's reinforcement learning notes
=== Links ===
* [https://rail.eecs.berkeley.edu/deeprlcourse-fa19/ Berkeley CS285]
* [https://www.youtube.com/watch?v=SupFHGbytvA&list=PL_iWQOsE6TfVYGEGiAOMaOzzv41Jfm_Ps Sergey Levine RL Lecture]
[[Category:Reinforcement Learning]]
=== Motivation ===
Consider a problem where we have to train a robot to pick up some object. A traditional ML algorithm might try to learn some function f(x) = y, where given some position x observed via the camera we output some behavior y. The trouble is that in the real world, the correct grab location is some function of the object and the physical environment, which is hard to intuitively ascertain by observation.
The motivation behind reinforcement learning is to repeatedly take observations, then sample the effects of actions on those observations (reward and new observation/state). Ultimately, we hope to create a policy pi that maps states or observations to actions.
=== Learning ===
Learning involves the agent taking actions and the environment returning a new state and reward.
* Input: <math>s_t</math>: States at each time step
* Output: <math>a_t</math>: Actions at each time step
* Data: <math>(s_1, a_1, r_1, ... , s_T, a_T, r_T)</math>
* Learn <math>\pi_\theta : s_t -> a_t </math> to maximize <math> \sum_t r_t </math>
=== State vs. Observation ===
A state is a complete representation of the physical world while the observation is some subset or representation of s. They are not necessarily the same in that we can't always infer s_t from o_t, but o_t is inferable from s_t. To think of it as a Bayes net, we have
* s_1 -> o_1 - (pi_theta) -> a_1 (policy)
* s_1, a_1 - (p(s_{t+1} | s_t, a_t)) -> s_2 (dynamics)
Note that theta represents the parameters of the policy (for example, the parameters of a neural network). Assumption: Markov Property - Future states are independent of past states given present states. This is the fundamental difference between states and observations.
=== Problem Representation ===
States and actions are typically continuous - thus, we often want to model our output policy as a density function, which gives a probability distribution over actions at a given state.
The reward is a function of the state and action, r(s, a), which tells us which states and actions are better. When choosing hyperparameters we need to be careful to make sure that we optimize for completing long-term goals instead of always looking for immediate reward.
=== Markov Chain & Decision Process===
Markov Chain: <math> M = \{S, T\} </math>, where S - state space, T - transition operator. The state space is the set of all states, and can be discrete or continuous. The transition probabilities are represented in a matrix, where the (i, j)'th entry is the probability of transitioning to state i from state j, and we can express the state distribution at the next time step by multiplying the current distribution by the transition operator.
Markov Decision Process: <math> M = \{S, A, T, r\} </math>, where A - action space. T is now a tensor, indexed by the current state, current action, and next state. We let T_{i, j, k} = p(s_{t+1} = i | s_t = j, a_t = k). r is the reward function.
=== Reinforcement Learning Algorithms - High-level ===
# Generate Samples (run policy)
# Fit a model/estimate something about how well policy is performing
# Improve policy
# Repeat
=== Temporal Difference Learning ===
Temporal Difference (TD) is a model for estimating the utility of states given some state-action-outcome information. Suppose we have some initial value <math>V_0(s) </math>, and we get some information <math> (s, a, s', r(s, a)) </math>. We can then use the update equation <math>V_{t+1}(s) = (1- \alpha)V_{t}(s)+\alpha(r(s, a) + \gamma V_t(s')) </math>. Here <math>\alpha</math> represents the learning rate, which is how much new information is weighted relative to old information, while <math>\gamma</math> represents the discount factor, which can be thought of as how much getting a reward in the future factors into our current reward.
=== Q Learning ===
Q Learning gives us a way to extract the optimal policy after learning. --
ece368361a6f395de83df2b2a0e062adf95ac799
1163
1162
2024-05-21T05:56:23Z
Allen12
15
wikitext
text/x-wiki
Allen's reinforcement learning notes
=== Links ===
* [https://rail.eecs.berkeley.edu/deeprlcourse-fa19/ Berkeley CS285]
* [https://www.youtube.com/watch?v=SupFHGbytvA&list=PL_iWQOsE6TfVYGEGiAOMaOzzv41Jfm_Ps Sergey Levine RL Lecture]
[[Category:Reinforcement Learning]]
=== Motivation ===
Consider a problem where we have to train a robot to pick up some object. A traditional ML algorithm might try to learn some function f(x) = y, where given some position x observed via the camera we output some behavior y. The trouble is that in the real world, the correct grab location is some function of the object and the physical environment, which is hard to intuitively ascertain by observation.
The motivation behind reinforcement learning is to repeatedly take observations, then sample the effects of actions on those observations (reward and new observation/state). Ultimately, we hope to create a policy pi that maps states or observations to actions.
=== Learning ===
Learning involves the agent taking actions and the environment returning a new state and reward.
* Input: <math>s_t</math>: States at each time step
* Output: <math>a_t</math>: Actions at each time step
* Data: <math>(s_1, a_1, r_1, ... , s_T, a_T, r_T)</math>
* Learn <math>\pi_\theta : s_t -> a_t </math> to maximize <math> \sum_t r_t </math>
=== State vs. Observation ===
A state is a complete representation of the physical world while the observation is some subset or representation of s. They are not necessarily the same in that we can't always infer s_t from o_t, but o_t is inferable from s_t. To think of it as a Bayes net, we have
* s_1 -> o_1 - (pi_theta) -> a_1 (policy)
* s_1, a_1 - (p(s_{t+1} | s_t, a_t)) -> s_2 (dynamics)
Note that theta represents the parameters of the policy (for example, the parameters of a neural network). Assumption: Markov Property - Future states are independent of past states given present states. This is the fundamental difference between states and observations.
=== Problem Representation ===
States and actions are typically continuous - thus, we often want to model our output policy as a density function, which gives a probability distribution over actions at a given state.
The reward is a function of the state and action, r(s, a), which tells us which states and actions are better. When choosing hyperparameters we need to be careful to make sure that we optimize for completing long-term goals instead of always looking for immediate reward.
=== Markov Chain & Decision Process===
Markov Chain: <math> M = \{S, T\} </math>, where S - state space, T - transition operator. The state space is the set of all states, and can be discrete or continuous. The transition probabilities are represented in a matrix, where the (i, j)'th entry is the probability of transitioning to state i from state j, and we can express the state distribution at the next time step by multiplying the current distribution by the transition operator.
Markov Decision Process: <math> M = \{S, A, T, r\} </math>, where A - action space. T is now a tensor, indexed by the current state, current action, and next state. We let T_{i, j, k} = p(s_{t+1} = i | s_t = j, a_t = k). r is the reward function.
=== Reinforcement Learning Algorithms - High-level ===
# Generate Samples (run policy)
# Fit a model/estimate something about how well policy is performing
# Improve policy
# Repeat
=== Temporal Difference Learning ===
Temporal Difference (TD) is a model for estimating the utility of states given some state-action-outcome information. Suppose we have some initial value <math>V_0(s) </math>, and we get some information <math> (s, a, s', r(s, a)) </math>. We can then use the update equation <math>V_{t+1}(s) = (1- \alpha)V_{t}(s)+\alpha(r(s, a) + \gamma V_t(s')) </math>. Here <math>\alpha</math> represents the learning rate, which is how much new information is weighted relative to old information, while <math>\gamma</math> represents the discount factor, which can be thought of as how much getting a reward in the future factors into our current reward.
=== Q Learning ===
Q Learning gives us a way to extract the optimal policy after learning. Instead of keeping track of the values of individual states, we keep track of Q values for state-action pairs, representing the utility of taking action a at state s.
How do we use this Q value? Two main ideas.
Idea 1: Policy iteration - if we have a policy <math> \pi </math> and we know <math> Q^\pi (s, a) </math>, we can improve the policy by deterministically setting the action at each state to be the argmax of <math> Q^\pi(s, a) </math> over all possible actions at that state.
Idea 2: Gradient update - If <math> Q^\pi(s, a) > V^\pi(s) </math>, then a is better than average. We then modify the policy to increase the probability of a.
b42a20c6f2a6141bf1b7a0eaac7649937dc504d7
1164
1163
2024-05-21T15:47:13Z
108.211.178.220
0
wikitext
text/x-wiki
Allen's reinforcement learning notes
=== Links ===
* [https://rail.eecs.berkeley.edu/deeprlcourse-fa19/ Berkeley CS285]
* [https://www.youtube.com/watch?v=SupFHGbytvA&list=PL_iWQOsE6TfVYGEGiAOMaOzzv41Jfm_Ps Sergey Levine RL Lecture]
[[Category:Reinforcement Learning]]
=== Motivation ===
Consider a problem where we have to train a robot to pick up some object. A traditional ML algorithm might try to learn some function f(x) = y, where given some position x observed via the camera we output some behavior y. The trouble is that in the real world, the correct grab location is some function of the object and the physical environment, which is hard to intuitively ascertain by observation.
The motivation behind reinforcement learning is to repeatedly take observations, then sample the effects of actions on those observations (reward and new observation/state). Ultimately, we hope to create a policy pi that maps states or observations to actions.
=== Learning ===
Learning involves the agent taking actions and the environment returning a new state and reward.
* Input: <math>s_t</math>: States at each time step
* Output: <math>a_t</math>: Actions at each time step
* Data: <math>(s_1, a_1, r_1, ... , s_T, a_T, r_T)</math>
* Learn <math>\pi_\theta : s_t -> a_t </math> to maximize <math> \sum_t r_t </math>
=== State vs. Observation ===
A state is a complete representation of the physical world while the observation is some subset or representation of s. They are not necessarily the same in that we can't always infer s_t from o_t, but o_t is inferable from s_t. Viewed as a network of conditional probabilities, we have
* <math> s_1 -> o_1 - (\pi_\theta) -> a_1 </math> (policy)
* <math> s_1, a_1 - (p(s_{t+1} | s_t, a_t)) -> s_2 </math> (dynamics)
Note that theta represents the parameters of the policy (for example, the parameters of a neural network). Assumption: Markov Property - Future states are independent of past states given present states. This is the fundamental difference between states and observations.
=== Problem Representation ===
States and actions are typically continuous - thus, we often want to model our output policy as a density function, which gives a probability distribution over actions at a given state.
The reward is a function of the state and action, r(s, a), which tells us which states and actions are better. We often tune the hyperparameters of the reward function to make model training faster.
=== Markov Chain & Decision Process===
Markov Chain: <math> M = \{S, T\} </math>, where S - state space, T - transition operator. The state space is the set of all states, and can be discrete or continuous. The transition probabilities are represented in a matrix, where the (i, j)'th entry is the probability of transitioning to state i from state j, and we can express the state distribution at the next time step by multiplying the current distribution by the transition operator.
Markov Decision Process: <math> M = \{S, A, T, r\} </math>, where A - action space. T is now a tensor, indexed by the current state, current action, and next state. We let T_{i, j, k} = p(s_{t+1} = i | s_t = j, a_t = k). r is the reward function.
=== Reinforcement Learning Algorithms - High-level ===
# Generate Samples (run policy)
# Fit a model/estimate something about how well policy is performing
# Improve policy
# Repeat
Policy Gradients: Directly differentiate the objective with respect to the policy parameters theta and perform gradient ascent on it
Value-based: Estimate the value function or Q-function of the optimal policy (the policy is often represented implicitly)
Actor-Critic: Estimate the value function or Q-function of the current policy, and use it to compute a better policy gradient
Model-based: Estimate a transition model, and then use it to improve the policy
=== REINFORCE ===
-
=== Temporal Difference Learning ===
Temporal Difference (TD) is a model for estimating the utility of states given some state-action-outcome information. Suppose we have some initial value <math>V_0(s) </math>, and we get some information <math> (s, a, s', r(s, a)) </math>. We can then use the update equation <math>V_{t+1}(s) = (1- \alpha)V_{t}(s)+\alpha(r(s, a) + \gamma V_t(s')) </math>. Here <math>\alpha</math> represents the learning rate, which is how much new information is weighted relative to old information, while <math>\gamma</math> represents the discount factor, which can be thought of as how much getting a reward in the future factors into our current reward.
=== Q Learning ===
Q Learning gives us a way to extract the optimal policy after learning. Instead of keeping track of the values of individual states, we keep track of Q values for state-action pairs, representing the utility of taking action a at state s.
How do we use this Q value? Two main ideas.
Idea 1: Policy iteration - if we have a policy <math> \pi </math> and we know <math> Q^\pi (s, a) </math>, we can improve the policy by deterministically setting the action at each state to be the argmax of <math> Q^\pi(s, a) </math> over all possible actions at that state.
<math> Q_{i+1}(s,a) = (1 - \alpha) Q_i(s,a) + \alpha (r(s, a) + \gamma V_i(s')) </math>
Idea 2: Gradient update - If <math> Q^\pi(s, a) > V^\pi(s) </math>, then a is better than average. We then modify the policy to increase the probability of a.
1050fd88814976490830b1b30bb20856a8de0a22
1166
1164
2024-05-21T18:15:02Z
Ben
2
/* Markov Chain & Decision Process */
wikitext
text/x-wiki
Allen's reinforcement learning notes
=== Links ===
* [https://rail.eecs.berkeley.edu/deeprlcourse-fa19/ Berkeley CS285]
* [https://www.youtube.com/watch?v=SupFHGbytvA&list=PL_iWQOsE6TfVYGEGiAOMaOzzv41Jfm_Ps Sergey Levine RL Lecture]
[[Category:Reinforcement Learning]]
=== Motivation ===
Consider a problem where we have to train a robot to pick up some object. A traditional ML algorithm might try to learn some function f(x) = y, where given some position x observed via the camera we output some behavior y. The trouble is that in the real world, the correct grab location is some function of the object and the physical environment, which is hard to intuitively ascertain by observation.
The motivation behind reinforcement learning is to repeatedly take observations, then sample the effects of actions on those observations (reward and new observation/state). Ultimately, we hope to create a policy pi that maps states or observations to actions.
=== Learning ===
Learning involves the agent taking actions and the environment returning a new state and reward.
* Input: <math>s_t</math>: States at each time step
* Output: <math>a_t</math>: Actions at each time step
* Data: <math>(s_1, a_1, r_1, ... , s_T, a_T, r_T)</math>
* Learn <math>\pi_\theta : s_t -> a_t </math> to maximize <math> \sum_t r_t </math>
=== State vs. Observation ===
A state is a complete representation of the physical world while the observation is some subset or representation of s. They are not necessarily the same in that we can't always infer s_t from o_t, but o_t is inferable from s_t. Viewed as a network of conditional probabilities, we have
* <math> s_1 -> o_1 - (\pi_\theta) -> a_1 </math> (policy)
* <math> s_1, a_1 - (p(s_{t+1} | s_t, a_t)) -> s_2 </math> (dynamics)
Note that theta represents the parameters of the policy (for example, the parameters of a neural network). Assumption: Markov Property - Future states are independent of past states given present states. This is the fundamental difference between states and observations.
=== Problem Representation ===
States and actions are typically continuous - thus, we often want to model our output policy as a density function, which gives a probability distribution over actions at a given state.
The reward is a function of the state and action, r(s, a), which tells us which states and actions are better. We often tune the hyperparameters of the reward function to make model training faster.
=== Markov Chain & Decision Process===
Markov Chain: <math> M = \{S, T\} </math>, where S - state space, T - transition operator. The state space is the set of all states, and can be discrete or continuous. The transition probabilities are represented in a matrix, where the (i, j)'th entry is the probability of transitioning to state i from state j, and we can express the state distribution at the next time step by multiplying the current distribution by the transition operator.
Markov Decision Process: <math> M = \{S, A, T, r\} </math>, where A - action space. T is now a tensor, indexed by the current state, current action, and next state. We let <math> T_{i, j, k} = p(s_{t+1} = i | s_t = j, a_t = k) </math>. r is the reward function.
=== Reinforcement Learning Algorithms - High-level ===
# Generate Samples (run policy)
# Fit a model/estimate something about how well policy is performing
# Improve policy
# Repeat
Policy Gradients: Directly differentiate the objective with respect to the policy parameters theta and perform gradient ascent on it
Value-based: Estimate the value function or Q-function of the optimal policy (the policy is often represented implicitly)
Actor-Critic: Estimate the value function or Q-function of the current policy, and use it to compute a better policy gradient
Model-based: Estimate a transition model, and then use it to improve the policy
=== REINFORCE ===
-
=== Temporal Difference Learning ===
Temporal Difference (TD) is a model for estimating the utility of states given some state-action-outcome information. Suppose we have some initial value <math>V_0(s) </math>, and we get some information <math> (s, a, s', r(s, a)) </math>. We can then use the update equation <math>V_{t+1}(s) = (1- \alpha)V_{t}(s)+\alpha(r(s, a) + \gamma V_t(s')) </math>. Here <math>\alpha</math> represents the learning rate, which is how much new information is weighted relative to old information, while <math>\gamma</math> represents the discount factor, which can be thought of as how much getting a reward in the future factors into our current reward.
=== Q Learning ===
Q Learning gives us a way to extract the optimal policy after learning. Instead of keeping track of the values of individual states, we keep track of Q values for state-action pairs, representing the utility of taking action a at state s.
How do we use this Q value? Two main ideas.
Idea 1: Policy iteration - if we have a policy <math> \pi </math> and we know <math> Q^\pi (s, a) </math>, we can improve the policy by deterministically setting the action at each state to be the argmax of <math> Q^\pi(s, a) </math> over all possible actions at that state.
<math> Q_{i+1}(s,a) = (1 - \alpha) Q_i(s,a) + \alpha (r(s, a) + \gamma V_i(s')) </math>
Idea 2: Gradient update - If <math> Q^\pi(s, a) > V^\pi(s) </math>, then a is better than average. We then modify the policy to increase the probability of a.
6982fc6458d9d8573052db52bcf3e21a10dbd8bd
1167
1166
2024-05-21T18:18:08Z
Ben
2
/* Q Learning */
wikitext
text/x-wiki
Allen's reinforcement learning notes
=== Links ===
* [https://rail.eecs.berkeley.edu/deeprlcourse-fa19/ Berkeley CS285]
* [https://www.youtube.com/watch?v=SupFHGbytvA&list=PL_iWQOsE6TfVYGEGiAOMaOzzv41Jfm_Ps Sergey Levine RL Lecture]
[[Category:Reinforcement Learning]]
=== Motivation ===
Consider a problem where we have to train a robot to pick up some object. A traditional ML algorithm might try to learn some function f(x) = y, where given some position x observed via the camera we output some behavior y. The trouble is that in the real world, the correct grab location is some function of the object and the physical environment, which is hard to intuitively ascertain by observation.
The motivation behind reinforcement learning is to repeatedly take observations, then sample the effects of actions on those observations (reward and new observation/state). Ultimately, we hope to create a policy pi that maps states or observations to actions.
=== Learning ===
Learning involves the agent taking actions and the environment returning a new state and reward.
* Input: <math>s_t</math>: States at each time step
* Output: <math>a_t</math>: Actions at each time step
* Data: <math>(s_1, a_1, r_1, ... , s_T, a_T, r_T)</math>
* Learn <math>\pi_\theta : s_t -> a_t </math> to maximize <math> \sum_t r_t </math>
=== State vs. Observation ===
A state is a complete representation of the physical world while the observation is some subset or representation of s. They are not necessarily the same in that we can't always infer s_t from o_t, but o_t is inferable from s_t. Viewed as a network of conditional probabilities, we have
* <math> s_1 -> o_1 - (\pi_\theta) -> a_1 </math> (policy)
* <math> s_1, a_1 - (p(s_{t+1} | s_t, a_t)) -> s_2 </math> (dynamics)
Note that theta represents the parameters of the policy (for example, the parameters of a neural network). Assumption: Markov Property - Future states are independent of past states given present states. This is the fundamental difference between states and observations.
=== Problem Representation ===
States and actions are typically continuous - thus, we often want to model our output policy as a density function, which gives a probability distribution over actions at a given state.
The reward is a function of the state and action, r(s, a), which tells us which states and actions are better. We often tune the hyperparameters of the reward function to make model training faster.
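A minimal sketch of such a density-based policy, here a state-conditioned Gaussian with made-up linear parameters (the shapes and names are purely illustrative):
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

def gaussian_policy(state, W, b, log_std):
    """Sample a continuous action from a Gaussian density conditioned on the state."""
    mu = W @ state + b                      # mean action predicted for this state
    return rng.normal(mu, np.exp(log_std))  # draw one action from the density

state = np.array([0.5, -0.2])                 # toy 2-dimensional state
W, b, log_std = rng.normal(size=(3, 2)), np.zeros(3), np.full(3, -1.0)
print(gaussian_policy(state, W, b, log_std))  # one sampled 3-dimensional action
</syntaxhighlight>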
=== Markov Chain & Decision Process===
Markov Chain: <math> M = \{S, T\} </math>, where S - state space, T - transition operator. The state space is the set of all states, and can be discrete or continuous. The transition probabilities are represented in a matrix, where the (i, j)'th entry is the probability of transitioning to state i from state j, and we can express the state distribution at the next time step by multiplying the current distribution by the transition operator.
Markov Decision Process: <math> M = \{S, A, T, r\} </math>, where A - action space. T is now a tensor, indexed by the current state, current action, and next state. We let <math> T_{i, j, k} = p(s_{t+1} = i | s_t = j, a_t = k) </math>. r is the reward function.
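A small numeric illustration of the transition-operator view (the probabilities are made up): multiplying the current state distribution by T gives the distribution at the next time step.
<syntaxhighlight lang="python">
import numpy as np

# T[i, j] = p(s_{t+1} = i | s_t = j); each column sums to 1
T = np.array([[0.9, 0.5],
              [0.1, 0.5]])

mu_t = np.array([1.0, 0.0])   # currently in state 0 with probability 1
mu_next = T @ mu_t            # distribution over states at the next time step
print(mu_next)                # [0.9 0.1]
</syntaxhighlight>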
=== Reinforcement Learning Algorithms - High-level ===
# Generate Samples (run policy)
# Fit a model/estimate something about how well policy is performing
# Improve policy
# Repeat
Policy Gradients: Directly differentiate the objective with respect to the policy parameters theta and perform gradient ascent on it
Value-based: Estimate the value function or Q-function of the optimal policy (the policy is often represented implicitly)
Actor-Critic: Estimate the value function or Q-function of the current policy, and use it to compute a better policy gradient
Model-based: Estimate a transition model, and then use it to improve the policy
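As a toy, self-contained illustration of the generate-samples / fit / improve loop above, a two-armed bandit stands in for a full robotics environment (all numbers invented):
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.2, 0.8])   # unknown to the agent
q_est = np.zeros(2)                 # fitted estimate of each action's value
counts = np.zeros(2)

for _ in range(200):
    # 1. generate samples: run an epsilon-greedy policy for one step
    a = rng.integers(2) if rng.random() < 0.1 else int(np.argmax(q_est))
    r = rng.normal(true_means[a], 0.1)
    # 2. fit: update the running estimate of how well each action performs
    counts[a] += 1
    q_est[a] += (r - q_est[a]) / counts[a]
    # 3. improve: the greedy policy improves implicitly as q_est improves

print(int(np.argmax(q_est)))        # usually prints 1, the better arm
</syntaxhighlight>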
=== REINFORCE ===
-
=== Temporal Difference Learning ===
Temporal Difference (TD) is a model for estimating the utility of states given some state-action-outcome information. Suppose we have some initial value <math>V_0(s) </math>, and we get some information <math> (s, a, s', r(s, a)) </math>. We can then use the update equation <math>V_{t+1}(s) = (1- \alpha)V_{t}(s)+\alpha(r(s, a) + \gamma V_t(s')) </math>. Here <math>\alpha</math> represents the learning rate, which is how much new information is weighted relative to old information, while <math>\gamma</math> represents the discount factor, which can be thought of as how much getting a reward in the future factors into our current reward.
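A minimal tabular sketch of this update rule (a plain dictionary value table; the names are illustrative):
<syntaxhighlight lang="python">
def td_update(V, s, s_next, r, alpha=0.1, gamma=0.99):
    """One TD(0) update: V(s) <- (1 - alpha) * V(s) + alpha * (r + gamma * V(s'))."""
    V[s] = (1 - alpha) * V[s] + alpha * (r + gamma * V[s_next])
    return V

V = {"s0": 0.0, "s1": 0.0}
V = td_update(V, "s0", "s1", r=1.0)
print(V["s0"])   # 0.1 after a single update
</syntaxhighlight>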
=== Q Learning ===
Q Learning gives us a way to extract the optimal policy after learning. Instead of keeping track of the values of individual states, we keep track of Q values for state-action pairs, representing the utility of taking action a at state s.
How do we use this Q value? Two main ideas.
Idea 1: Policy iteration - if we have a policy <math> \pi </math> and we know <math> Q^\pi (s, a) </math>, we can improve the policy by deterministically setting the action at each state to be the argmax of <math> Q^\pi(s, a) </math> over all possible actions at that state.
<math> Q_{i+1} (s,a) = (1 - \alpha) Q_i (s,a) + \alpha (r(s,a) + \gamma V_i(s'))</math>
Idea 2: Gradient update - If <math> Q^\pi(s, a) > V^\pi(s) </math>, then a is better than average. We then modify the policy to increase the probability of a.
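A tabular sketch of both ideas, using the common bootstrap <math>V_i(s') = \max_{a'} Q_i(s', a')</math> (that choice is an assumption here, since the notes leave <math>V_i</math> unspecified):
<syntaxhighlight lang="python">
import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One tabular Q update: Q(s,a) <- (1-alpha)*Q(s,a) + alpha*(r + gamma*max_a' Q(s',a'))."""
    target = r + gamma * np.max(Q[s_next])
    Q[s, a] = (1 - alpha) * Q[s, a] + alpha * target
    return Q

def greedy_policy(Q):
    """Idea 1: improve the policy by taking the argmax action in every state."""
    return np.argmax(Q, axis=1)

Q = np.zeros((3, 2))                  # 3 states, 2 actions
Q = q_update(Q, s=0, a=1, r=1.0, s_next=2)
print(greedy_policy(Q))               # state 0 now prefers action 1
</syntaxhighlight>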
c910359bfda75bbdd24c2678f6998b20c330dd71
MuJoCo WASM
0
257
1160
1116
2024-05-21T05:21:02Z
Vrtnis
21
/* Build the mujoco_wasm Binary */
wikitext
text/x-wiki
== Install emscripten ==
First, you need to install emscripten, which is a compiler toolchain for WebAssembly.
=== Get the emsdk repo ===
<code>
git clone https://github.com/emscripten-core/emsdk.git
</code>
=== Enter that directory ===
<code>
cd emsdk
</code>
=== Download and install the latest SDK tools ===
<code>
./emsdk install latest
</code>
=== Make the "latest" SDK "active" ===
<code>
./emsdk activate latest
</code>
=== Activate PATH and other environment variables ===
<code>
source ./emsdk_env.sh
</code>
These variables are set for the current terminal only. To make them available in all terminals, add them to your shell profile. The environment variables are:
<code>
EMSDK = < path to emsdk dir >
EM_CONFIG = ~/.emscripten
EMSDK_NODE = < path to emsdk dir >/node/12.9.1_64bit/bin/node
</code>
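For example (a sketch; adjust the path to wherever you cloned emsdk), you can source the environment script from your shell profile so every new terminal picks it up:
<syntaxhighlight lang=bash>
# Append to ~/.bashrc (or ~/.zshrc) so new terminals get the emsdk environment.
echo 'source "$HOME/emsdk/emsdk_env.sh"' >> ~/.bashrc
</syntaxhighlight>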
=== Now just try it! ===
<code>
emcc
</code>
== Build the mujoco_wasm Binary ==
First, clone the repository:
<code> git clone https://github.com/zalo/mujoco_wasm </code>
Next, you'll build the MuJoCo WebAssembly binary.
<syntaxhighlight lang="bash">
mkdir build
cd build
emcmake cmake ..
make
</syntaxhighlight>
[[File:Carbon (1).png|800px|thumb|none|emcmake cmake ..]]
[[File:Carbon (2).png|400px|thumb|none|make]]
'''Tip:''' If you get an error with "undefined symbol: saveSetjmp/testSetjmp" at the build step, revert to:
<code>
./emsdk install 3.1.56 && ./emsdk activate 3.1.56 && source ./emsdk_env.sh
</code>
== Running in Browser ==
Run this in your mujoco folder to start a server.
<code>
python -m http.server 8000
</code>
Then navigate to:
<code>
http://localhost:8000/index.html
</code>
[[File:Wasm screenshot13-40-40.png|800px|thumb|none|MuJoCo running in browser]]
5ff174df1e8eb83e2912c817c31370188123fb8d
Isaac's Algorithm Notes
0
271
1165
2024-05-21T18:11:24Z
Ben
2
Created page with "Allen's algorithm learning notes"
wikitext
text/x-wiki
Allen's algorithm learning notes
8218414d6d987f94b48002efb7d9fb69d504b2db
1168
1165
2024-05-21T18:18:47Z
Ben
2
wikitext
text/x-wiki
Isaac's algorithm learning notes
1a36deceabfaf2d0a46ddc0c6a90bd98804db217
1177
1168
2024-05-22T00:11:46Z
Is2ac
30
wikitext
text/x-wiki
Isaac's Beam Search learning notes
=== Links ===
* [https://www.width.ai/post/what-is-beam-search#:~:text=Beam%20search%20is%20an%20algorithm,probability%20or%20next%20output%20character. Width.ai Beam Search]
* [https://towardsdatascience.com/foundations-of-nlp-explained-visually-beam-search-how-it-works-1586b9849a24 towarddatascience.com visuals]
*[https://en.wikipedia.org/wiki/Log_probability Log probability wiki]
*[https://en.wikipedia.org/wiki/Softmax_function Softmax_function wiki]
[[Category:Beam Search]]
=== Motivation ===
Beam Search is used in the context of AI as a final decision making layer for many NLP and speech recognition models.
Example: Sequence to Sequence (Seq2Seq) Modeling
Task: Translate a sentence from Arabic to English.
Overview:
* '''1. Tokenization/Normalization:''' Arabic sentence is split into tokens designed for the LLM and normalized
* '''2. Encoding:''' LLM encodes the tokens into numerical tokens and generates a sequence of hidden states to represent the input sentence
* '''3. Initialize Beam Search:''' Determine the parameters of Beam Search and the decoder's initial states
* '''4. Decoding:''' Begin with start-of-sequence token. Model generates probabilities for the next token in each sequence and passes them through an output layer, including a softmax function that normalizes probabilities. Beam Search chooses which paths to continue following and prunes the rest.
* '''5. Finalize Output:''' Select Beam with highest probability as final translation, convert tokenized output back into a readable sentence with punctuation, capitalization, etc.
Now, to break down the steps of Beam Search
Conditional Probability Notes:
* Sequence to Sequence search algorithms are based on '''Conditional Probability''' -> describe the likelihood of an event happening given that another event or sequence of events has already occurred
* i.e the probability of a new token given the existing sequence is NOT independent and can be calculated <math>Prob(ABC) = Prob(AB) * Prob(C|AB)</math> where <math>Prob(C|AB)</math> is the probability of <math>C</math> occurring given that <math>AB</math> have already occurred
* As a result, the graph we run the algorithm on is NOT a Markov Chain Graph, so states are dependent.
=== Naive Approach #1: Greedy ===
* Greedy Search takes the best solution at each state in the graph, regardless of previous leaves or future leaves in the sequence. In the context of sequence to sequence, greedy search takes the highest probability word at each position in the sequence and takes it as part of the output.
Let's take <math>A, B, C</math> as possible words for our sequence with initial probabilities to be <math>A_0=0.2, B_0=0.5, C_0=0.3</math>
The greedy search will take B, the word with highest probability, to be the first word in our sequence. It will continue as such until the end of the sentence is reached.
This greedy strategy may be optimal at the current spot of the sequence, but as one might predict, the algorithm struggles with larger outputs where such a path is not the optimal.
If we have at most <math>m</math> words in our final sentence and each state has <math>n</math> word options, this algorithm runs with a time complexity of <math>O(nm)</math>
=== Naive Approach #2: Breadth First Search ===
The BFS approach considers every possible sequence of words and outputs the highest probability sequence among all.
For example, let's take <math>A, B, C</math> as possible words for our sequence. (Assuming the length of the sequence is fixed at 3) BFS will find the maximum of <math>Prob(AAA), Prob(AAB), ... Prob(CCC)</math> and output that respective sequence
BFS is guaranteed to find the optimal sequence, but its runtime is far too large to be feasible for use.
If we have at most <math>m</math> words in our final sentence and each state has <math>n</math> word options, this algorithm runs with a time complexity of <math>O(n^m)</math>
=== Beam Search ===
Beam Search is a heuristic algorithm which combines the ideas of the Greedy and BFS algorithms.
* Maintains a 'beam width' amount of the most promising sequences
* At each iteration add all possible continuations of candidate sequences, and takes the 'beam width' best from those to repeat the algorithm
* After the algorithm is finished we take the best option from the candidate sequences as our output
If we have at most <math>m</math> words in our final sentence, each state has <math>n</math> word options, and we maintain a maximum beam width of <math>k</math>, this algorithm runs with a time complexity of <math>O(nmk log(k))</math>
[[File:Screenshot 2024-05-21 at 7.50.48 PM.png|thumb]]
'''Top-K Sampling for Beam Search''':
*<math>k</math> -> maximum width of beams during search
*all other probabilities are set to 0
*allows for a more consistent runtime
* lower value => faster runtime, less diversity, less optimal results
'''Top-P Sampling for Beam Search (Nucleus Sampling)''':
*<math>p</math> -> maximum sum of probabilities among all sequences
*usually dynamically determined based on the cumulative probabilities of the tokens in the probability distribution -> ensures a certain proportion of the probability mass is considered
*top-p sampling allows for more diversity in generated sequences
* lower value => faster runtime, less diversity, less optimal results
=== Other Notes ===
* Softmax Function:
[[File:Screenshot 2024-05-21 at 6.56.45 PM.png|thumb]]
ensures that all probabilities at each state are in the range (0,1) and sum to 1
* Using log probabilities to deal with very small probability values: let <math>P</math> be the cumulative probability score of all words in the sequence so far, <math>P=p_0*p_1*p_2...p_n</math>
* Since the value of <math>P</math> can become very small, we can run into floating-point underflow and rounding errors. One strategy around this is to sum the natural logs of the individual probabilities and compare those sums instead. This works because
* <math>P=p_0*p_1*p_2...p_n</math>
* <math>\log(P)=\log(p_0)+\log(p_1)+\log(p_2)+...+\log(p_n)</math>
* <math>P_1>P_2 \Rightarrow \log(P_1)>\log(P_2)</math> (and conversely, since log is monotonic), so values can still be compared in log form
c268f8230d0bf604704132732db965dae90cbc44
1178
1177
2024-05-22T00:12:18Z
Is2ac
30
/* Beam Search */
wikitext
text/x-wiki
Isaac's Beam Search learning notes
=== Links ===
* [https://www.width.ai/post/what-is-beam-search#:~:text=Beam%20search%20is%20an%20algorithm,probability%20or%20next%20output%20character. Width.ai Beam Search]
* [https://towardsdatascience.com/foundations-of-nlp-explained-visually-beam-search-how-it-works-1586b9849a24 towarddatascience.com visuals]
*[https://en.wikipedia.org/wiki/Log_probability Log probability wiki]
*[https://en.wikipedia.org/wiki/Softmax_function Softmax_function wiki]
[[Category:Beam Search]]
=== Motivation ===
Beam Search is used in the context of AI as a final decision making layer for many NLP and speech recognition models.
Example: Sequence to Sequence (Seq2Seq) Modeling
Task: Translate a sentence from Arabic to English.
Overview:
* '''1. Tokenization/Normalization:''' Arabic sentence is split into tokens designed for the LLM and normalized
* '''2. Encoding:''' LLM encodes the tokens into numerical tokens and generates a sequence of hidden states to represent the input sentence
* '''3. Initialize Beam Search:''' Determine the parameters of Beam Search and the decoder's initial states
* '''4. Decoding:''' Begin with start-of-sequence token. Model generates probabilities for the next token in each sequence and passes them through an output layer, including a softmax function that normalizes probabilities. Beam Search chooses which paths to continue following and prunes the rest.
* '''5. Finalize Output:''' Select Beam with highest probability as final translation, convert tokenized output back into a readable sentence with punctuation, capitalization, etc.
Now, to break down the steps of Beam Search
Conditional Probability Notes:
* Sequence to Sequence search algorithms are based on '''Conditional Probability''' -> describe the likelihood of an event happening given that another event or sequence of events has already occurred
* i.e the probability of a new token given the existing sequence is NOT independent and can be calculated <math>Prob(ABC) = Prob(AB) * Prob(C|AB)</math> where <math>Prob(C|AB)</math> is the probability of <math>C</math> occurring given that <math>AB</math> have already occurred
* As a result, the graph we run the algorithm on is NOT a Markov Chain Graph, so states are dependent.
=== Naive Approach #1: Greedy ===
* Greedy Search takes the best solution at each state in the graph, regardless of previous leaves or future leaves in the sequence. In the context of sequence to sequence, greedy search takes the highest probability word at each position in the sequence and takes it as part of the output.
Let's take <math>A, B, C</math> as possible words for our sequence with initial probabilities to be <math>A_0=0.2, B_0=0.5, C_0=0.3</math>
The greedy search will take B, the word with highest probability, to be the first word in our sequence. It will continue as such until the end of the sentence is reached.
This greedy strategy may be optimal at the current spot of the sequence, but as one might predict, the algorithm struggles with larger outputs where such a path is not the optimal.
If we have at most <math>m</math> words in our final sentence and each state has <math>n</math> word options, this algorithm runs with a time complexity of <math>O(nm)</math>
=== Naive Approach #2: Breadth First Search ===
The BFS approach considers every possible sequence of words and outputs the highest probability sequence among all.
For example, let's take <math>A, B, C</math> as possible words for our sequence. (Assuming the length of the sequence is fixed at 3) BFS will find the maximum of <math>Prob(AAA), Prob(AAB), ... Prob(CCC)</math> and output that respective sequence
BFS is guaranteed to find the optimal sequence, but its runtime is far too large to be feasible for use.
If we have at most <math>m</math> words in our final sentence and each state has <math>n</math> word options, this algorithm runs with a time complexity of <math>O(n^m)</math>
=== Beam Search ===
Beam Search is a heuristic algorithm which combines the ideas of the Greedy and BFS algorithms.
* Maintains a 'beam width' amount of the most promising sequences
* At each iteration add all possible continuations of candidate sequences, and takes the 'beam width' best from those to repeat the algorithm
* After the algorithm is finished we take the best option from the candidate sequences as our output
If we have at most <math>m</math> words in our final sentence, each state has <math>n</math> word options, and we maintain a maximum beam width of <math>k</math>, this algorithm runs with a time complexity of <math>O(nmk log(k))</math>
[[File:Screenshot 2024-05-21 at 7.50.48 PM.png|thumb]]
'''Top-K Sampling for Beam Search''':
*<math>k</math> -> maximum width of beams during search
*all other probabilities are set to 0
*allows for a more consistent runtime
* lower value => faster runtime, less diversity, less optimal results
'''Top-P Sampling for Beam Search (Nucleus Sampling)''':
*<math>p</math> -> maximum sum of probabilities among all sequences
*usually dynamically determined based on the cumulative probabilities of the tokens in the probability distribution -> ensures a certain proportion of the probability mass is considered
*top-p sampling allows for more diversity in generated sequences
* lower value => faster runtime, less diversity, less optimal results
=== Other Notes ===
* Softmax Function:
[[File:Screenshot 2024-05-21 at 6.56.45 PM.png|thumb]]
ensures that all probabilities at each state are in the range (0,1) and sum to 1
* Using log probabilities to deal with very small probability values: let <math>P</math> be the cumulative probability score of all words in the sequence so far, <math>P=p_0*p_1*p_2...p_n</math>
* Since the value of <math>P</math> can become very small, we can run into floating-point underflow and rounding errors. One strategy around this is to sum the natural logs of the individual probabilities and compare those sums instead. This works because
* <math>P=p_0*p_1*p_2...p_n</math>
* <math>\log(P)=\log(p_0)+\log(p_1)+\log(p_2)+...+\log(p_n)</math>
* <math>P_1>P_2 \Rightarrow \log(P_1)>\log(P_2)</math> (and conversely, since log is monotonic), so values can still be compared in log form
300d0d6bfac1083948620ac4f714c86d836569b1
1179
1178
2024-05-22T00:13:10Z
Is2ac
30
/* Other Notes = */
wikitext
text/x-wiki
Isaac's Beam Search learning notes
=== Links ===
* [https://www.width.ai/post/what-is-beam-search#:~:text=Beam%20search%20is%20an%20algorithm,probability%20or%20next%20output%20character. Width.ai Beam Search]
* [https://towardsdatascience.com/foundations-of-nlp-explained-visually-beam-search-how-it-works-1586b9849a24 towarddatascience.com visuals]
*[https://en.wikipedia.org/wiki/Log_probability Log probability wiki]
*[https://en.wikipedia.org/wiki/Softmax_function Softmax_function wiki]
[[Category:Beam Search]]
=== Motivation ===
Beam Search is used in the context of AI as a final decision making layer for many NLP and speech recognition models.
Example: Sequence to Sequence (Seq2Seq) Modeling
Task: Translate a sentence from Arabic to English.
Overview:
* '''1. Tokenization/Normalization:''' Arabic sentence is split into tokens designed for the LLM and normalized
* '''2. Encoding:''' LLM encodes the tokens into numerical tokens and generates a sequence of hidden states to represent the input sentence
* '''3. Initialize Beam Search:''' Determine the parameters of Beam Search and the decoder's initial states
* '''4. Decoding:''' Begin with start-of-sequence token. Model generates probabilities for the next token in each sequence and passes them through an output layer, including a softmax function that normalizes probabilities. Beam Search chooses which paths to continue following and prunes the rest.
* '''5. Finalize Output:''' Select Beam with highest probability as final translation, convert tokenized output back into a readable sentence with punctuation, capitalization, etc.
Now, to break down the steps of Beam Search
Conditional Probability Notes:
* Sequence to Sequence search algorithms are based on '''Conditional Probability''' -> describe the likelihood of an event happening given that another event or sequence of events has already occurred
* i.e the probability of a new token given the existing sequence is NOT independent and can be calculated <math>Prob(ABC) = Prob(AB) * Prob(C|AB)</math> where <math>Prob(C|AB)</math> is the probability of <math>C</math> occurring given that <math>AB</math> have already occurred
* As a result, the graph we run the algorithm on is NOT a Markov Chain Graph, so states are dependent.
=== Naive Approach #1: Greedy ===
* Greedy Search takes the best solution at each state in the graph, regardless of previous leaves or future leaves in the sequence. In the context of sequence to sequence, greedy search takes the highest probability word at each position in the sequence and takes it as part of the output.
Let's take <math>A, B, C</math> as possible words for our sequence with initial probabilities to be <math>A_0=0.2, B_0=0.5, C_0=0.3</math>
The greedy search will take B, the word with highest probability, to be the first word in our sequence. It will continue as such until the end of the sentence is reached.
This greedy strategy may be optimal at the current spot of the sequence, but as one might predict, the algorithm struggles with larger outputs where such a path is not the optimal.
If we have at most <math>m</math> words in our final sentence and each state has <math>n</math> word options, this algorithm runs with a time complexity of <math>O(nm)</math>
=== Naive Approach #2: Breadth First Search ===
The BFS approach considers every possible sequence of words and outputs the highest probability sequence among all.
For example, let's take <math>A, B, C</math> as possible words for our sequence. (Assuming the length of the sequence is fixed at 3) BFS will find the maximum of <math>Prob(AAA), Prob(AAB), ... Prob(CCC)</math> and output that respective sequence
BFS is guaranteed to find the optimal sequence, but its runtime is far too large to be feasible for use.
If we have at most <math>m</math> words in our final sentence and each state has <math>n</math> word options, this algorithm runs with a time complexity of <math>O(n^m)</math>
=== Beam Search ===
Beam Search is a heuristic algorithm which combines the ideas of the Greedy and BFS algorithms.
* Maintains a 'beam width' amount of the most promising sequences
* At each iteration add all possible continuations of candidate sequences, and takes the 'beam width' best from those to repeat the algorithm
* After the algorithm is finished we take the best option from the candidate sequences as our output
If we have at most <math>m</math> words in our final sentence, each state has <math>n</math> word options, and we maintain a maximum beam width of <math>k</math>, this algorithm runs with a time complexity of <math>O(nmk log(k))</math>
[[File:Screenshot 2024-05-21 at 7.50.48 PM.png|thumb]]
'''Top-K Sampling for Beam Search''':
*<math>k</math> -> maximum width of beams during search
*all other probabilities are set to 0
*allows for a more consistent runtime
* lower value => faster runtime, less diversity, less optimal results
'''Top-P Sampling for Beam Search (Nucleus Sampling)''':
*<math>p</math> -> maximum sum of probabilities among all sequences
*usually dynamically determined based on the cumulative probabilities of the tokens in the probability distribution -> ensures a certain proportion of the probability mass is considered
*top-p sampling allows for more diversity in generated sequences
* lower value => faster runtime, less diversity, less optimal results
=== Other Notes ===
'''Softmax Function:'''
[[File:Screenshot 2024-05-21 at 6.56.45 PM.png|thumb]]
* ensures that all probabilities at each state are in the range (0,1) and sum to 1
'''Semi-log plot trick'''
* Using log probabilities to deal with very small probability values: let <math>P</math> be the cumulative probability score of all words in the sequence so far, <math>P=p_0*p_1*p_2...p_n</math>
* Since the value of <math>P</math> can become very small, we can run into floating-point underflow and rounding errors. One strategy around this is to sum the natural logs of the individual probabilities and compare those sums instead. This works because
* <math>P=p_0*p_1*p_2...p_n</math>
* <math>\log(P)=\log(p_0)+\log(p_1)+\log(p_2)+...+\log(p_n)</math>
* <math>P_1>P_2 \Rightarrow \log(P_1)>\log(P_2)</math> (and conversely, since log is monotonic), so values can still be compared in log form
1be513bfeb7e40dfcd7c48cd377767f1119db532
1180
1179
2024-05-22T00:40:27Z
Is2ac
30
/* Other Notes = */
wikitext
text/x-wiki
Isaac's Beam Search learning notes
=== Links ===
* [https://www.width.ai/post/what-is-beam-search#:~:text=Beam%20search%20is%20an%20algorithm,probability%20or%20next%20output%20character. Width.ai Beam Search]
* [https://towardsdatascience.com/foundations-of-nlp-explained-visually-beam-search-how-it-works-1586b9849a24 towarddatascience.com visuals]
*[https://en.wikipedia.org/wiki/Log_probability Log probability wiki]
*[https://en.wikipedia.org/wiki/Softmax_function Softmax_function wiki]
[[Category:Beam Search]]
=== Motivation ===
Beam Search is used in the context of AI as a final decision making layer for many NLP and speech recognition models.
Example: Sequence to Sequence (Seq2Seq) Modeling
Task: Translate a sentence from Arabic to English.
Overview:
* '''1. Tokenization/Normalization:''' Arabic sentence is split into tokens designed for the LLM and normalized
* '''2. Encoding:''' LLM encodes the tokens into numerical IDs and generates a sequence of hidden states to represent the input sentence
* '''3. Initialize Beam Search:''' Determine the parameters of Beam Search and the decoder's initial states
* '''4. Decoding:''' Begin with start-of-sequence token. Model generates probabilities for the next token in each sequence and passes them through an output layer, including a softmax function that normalizes probabilities. Beam Search chooses which paths to continue following and prunes the rest.
* '''5. Finalize Output:''' Select Beam with highest probability as final translation, convert tokenized output back into a readable sentence with punctuation, capitalization, etc.
Now, to break down the steps of Beam Search:
Conditional Probability Notes:
* Sequence to Sequence search algorithms are based on '''Conditional Probability''' -> describe the likelihood of an event happening given that another event or sequence of events has already occurred
* i.e the probability of a new token given the existing sequence is NOT independent and can be calculated <math>Prob(ABC) = Prob(AB) * Prob(C|AB)</math> where <math>Prob(C|AB)</math> is the probability of <math>C</math> occurring given that <math>AB</math> have already occurred
* As a result, the graph we run the algorithm on is NOT a Markov Chain Graph, so states are dependent.
=== Naive Approach #1: Greedy ===
* Greedy Search takes the best solution at each state in the graph, regardless of previous leaves or future leaves in the sequence. In the context of sequence to sequence, greedy search takes the highest probability word at each position in the sequence and takes it as part of the output.
Let's take <math>A, B, C</math> as possible words for our sequence with initial probabilities to be <math>A_0=0.2, B_0=0.5, C_0=0.3</math>
The greedy search will take B, the word with highest probability, to be the first word in our sequence. It will continue as such until the end of the sentence is reached.
This greedy strategy may be optimal at the current spot of the sequence, but as one might predict, the algorithm struggles with longer outputs where such a path is not optimal.
If we have at most <math>m</math> words in our final sentence and each state has <math>n</math> word options, this algorithm runs with a time complexity of <math>O(nm)</math>
=== Naive Approach #2: Breadth First Search ===
The BFS approach considers every possible sequence of words and outputs the highest probability sequence among all.
For example, let's take <math>A, B, C</math> as possible words for our sequence. (Assuming the length of the sequence is fixed at 3) BFS will find the maximum of <math>Prob(AAA), Prob(AAB), ... Prob(CCC)</math> and output that respective sequence
BFS is guaranteed to find the optimal sequence, but its runtime is far too large to be feasible for use.
If we have at most <math>m</math> words in our final sentence and each state has <math>n</math> word options, this algorithm runs with a time complexity of <math>O(n^m)</math>
=== Beam Search ===
Beam Search is a heuristic algorithm which combines the ideas of the Greedy and BFS algorithms.
* Maintains a fixed number of the most promising candidate sequences, called the 'beam width'
* At each iteration, expands every candidate sequence with all possible continuations, then keeps the 'beam width' best of those expansions and repeats
* After the algorithm finishes, the best of the remaining candidate sequences is taken as the output
If we have at most <math>m</math> words in our final sentence, each state has <math>n</math> word options, and we maintain a beam width of <math>k</math>, this algorithm runs with a time complexity of <math>O(nmk\log(k))</math>
[[File:Screenshot 2024-05-21 at 7.50.48 PM.png|thumb]]
'''Top-K Sampling for Beam Search''':
*<math>k</math> -> only the <math>k</math> highest-probability tokens are considered at each step, which also caps the width of the beams during search
*the probabilities of all other tokens are set to 0
*allows for a more consistent runtime
* lower value => faster runtime, less diversity, less optimal results
'''Top-P Sampling for Beam Search (Nucleus Sampling)''':
*<math>p</math> -> threshold on cumulative probability: the smallest set of highest-probability tokens whose probabilities sum to at least <math>p</math> is kept
*the number of tokens kept is therefore determined dynamically from the probability distribution -> ensures a fixed proportion of the probability mass is always considered
*top-p sampling allows for more diversity in generated sequences
* lower value => faster runtime, less diversity, less optimal results
=== Other Notes ===
'''Softmax Function:'''
[[File:Screenshot 2024-05-21 at 6.56.45 PM.png|thumb]]
* ensures that all probabilities at each state are in the range (0,1) and sum to 1
'''Semi-log plot trick'''
* Using logarithms to deal with very small probability values: let <math>P</math> be the cumulative probability score of all words in the sequence so far, <math>P = p_0 p_1 p_2 \cdots p_n</math>
* Since the value of <math>P</math> can become very small, we can run into floating-point rounding (underflow) errors. One strategy around this is to work with the sum of the natural logs of the individual probabilities instead of the product, and to use that sum when comparing values of <math>P</math>. This works because
* <math>P = p_0 p_1 p_2 \cdots p_n</math>
* <math>\log(P) = \log(p_0) + \log(p_1) + \log(p_2) + \cdots + \log(p_n)</math>
* <math>\log</math> is monotonically increasing, so <math>P_1 > P_2 \iff \log(P_1) > \log(P_2)</math>, and values can still be compared in this form
f199c71bfedd597637d2e32f2e1b91030186a5d5
1181
1180
2024-05-22T00:47:52Z
Is2ac
30
wikitext
text/x-wiki
Isaac's Beam Search learning notes
=== Links ===
* [https://www.width.ai/post/what-is-beam-search#:~:text=Beam%20search%20is%20an%20algorithm,probability%20or%20next%20output%20character. Width.ai Beam Search]
* [https://towardsdatascience.com/foundations-of-nlp-explained-visually-beam-search-how-it-works-1586b9849a24 towardsdatascience.com visuals]
*[https://en.wikipedia.org/wiki/Log_probability Log probability wiki]
*[https://en.wikipedia.org/wiki/Softmax_function Softmax_function wiki]
[[Category:Beam Search]]
=== Motivation ===
Beam Search is used in the context of AI as a final decision making layer for many NLP and speech recognition models.
Example: Sequence to Sequence (Seq2Seq) Modeling
Task: Translate a sentence from Arabic to English.
Overview:
* '''1. Tokenization/Normalization:''' Arabic sentence is split into tokens designed for the LLM and normalized
* '''2. Encoding:''' The model maps the tokens to numerical IDs and the encoder generates a sequence of hidden states representing the input sentence
* '''3. Initialize Beam Search:''' Determine the parameters of Beam Search and the decoder's initial states
* '''4. Decoding:''' Begin with start-of-sequence token. Model generates probabilities for the next token in each sequence and passes them through an output layer, including a softmax function that normalizes probabilities. Beam Search chooses which paths to continue following and prunes the rest.
* '''5. Finalize Output:''' Select Beam with highest probability as final translation, convert tokenized output back into a readable sentence with punctuation, capitalization, etc.
Conditional Probability Notes:
* Sequence to Sequence search algorithms are based on '''Conditional Probability''', which describes the likelihood of an event happening given that another event or sequence of events has already occurred
* i.e. the probability of a new token is NOT independent of the existing sequence and can be calculated as <math>Prob(ABC) = Prob(AB) \cdot Prob(C|AB)</math>, where <math>Prob(C|AB)</math> is the probability of <math>C</math> occurring given that <math>AB</math> has already occurred
* As a result, the graph we run the algorithm on is NOT a Markov chain: each state depends on the entire preceding sequence, not just on the previous state.
=== Naive Approach #1: Greedy ===
* Greedy Search takes the best option at each state in the graph, regardless of earlier or later choices in the sequence. In the context of sequence to sequence, greedy search takes the highest-probability word at each position in the sequence and adds it to the output.
Let's take <math>A, B, C</math> as possible words for our sequence with initial probabilities <math>A_0=0.2, B_0=0.5, C_0=0.3</math>
The greedy search will take B, the word with the highest probability, to be the first word in our sequence. It will continue as such until the end of the sentence is reached.
This greedy strategy may be optimal at the current position in the sequence, but as one might predict, the algorithm struggles with longer outputs, where the locally best path is often not globally optimal.
If we have at most <math>m</math> words in our final sentence and each state has <math>n</math> word options, this algorithm runs with a time complexity of <math>O(nm)</math>
=== Naive Approach #2: Breadth First Search ===
The BFS approach considers every possible sequence of words and outputs the highest probability sequence among all.
For example, let's take <math>A, B, C</math> as possible words for our sequence. (Assuming the length of the sequence is fixed at 3) BFS will find the maximum of <math>Prob(AAA), Prob(AAB), ... Prob(CCC)</math> and output that respective sequence
BFS is guaranteed to find the optimal sequence, but its runtime is far too large to be feasible for use.
If we have at most <math>m</math> words in our final sentence and each state has <math>n</math> word options, this algorithm runs with a time complexity of <math>O(n^m)</math>
=== Beam Search ===
Beam Search is a heuristic algorithm which combines the ideas of the Greedy and BFS algorithms.
* Maintains a fixed number of the most promising candidate sequences, called the 'beam width'
* At each iteration, expands every candidate sequence with all possible continuations, then keeps the 'beam width' best of those expansions and repeats
* After the algorithm finishes, the best of the remaining candidate sequences is taken as the output
If we have at most <math>m</math> words in our final sentence, each state has <math>n</math> word options, and we maintain a beam width of <math>k</math>, this algorithm runs with a time complexity of <math>O(nmk\log(k))</math>
[[File:Screenshot 2024-05-21 at 7.50.48 PM.png|thumb]]
'''Top-K Sampling for Beam Search''':
*<math>k</math> -> only the <math>k</math> highest-probability tokens are considered at each step, which also caps the width of the beams during search
*the probabilities of all other tokens are set to 0
*allows for a more consistent runtime
* lower value => faster runtime, less diversity, less optimal results
'''Top-P Sampling for Beam Search (Nucleus Sampling)''':
*<math>p</math> -> threshold on cumulative probability: the smallest set of highest-probability tokens whose probabilities sum to at least <math>p</math> is kept
*the number of tokens kept is therefore determined dynamically from the probability distribution -> ensures a fixed proportion of the probability mass is always considered
*top-p sampling allows for more diversity in generated sequences
* lower value => faster runtime, less diversity, less optimal results
=== Other Notes ===
'''Softmax Function:'''
[[File:Screenshot 2024-05-21 at 6.56.45 PM.png|thumb]]
* ensures that all probabilities at each state are in the range (0,1) and sum to 1
'''Semi-log plot trick'''
* Using logarithms to deal with very small probability values: let <math>P</math> be the cumulative probability score of all words in the sequence so far, <math>P = p_0 p_1 p_2 \cdots p_n</math>
* Since the value of <math>P</math> can become very small, we can run into floating-point rounding (underflow) errors. One strategy around this is to work with the sum of the natural logs of the individual probabilities instead of the product, and to use that sum when comparing values of <math>P</math>. This works because
* <math>P = p_0 p_1 p_2 \cdots p_n</math>
* <math>\log(P) = \log(p_0) + \log(p_1) + \log(p_2) + \cdots + \log(p_n)</math>
* <math>\log</math> is monotonically increasing, so <math>P_1 > P_2 \iff \log(P_1) > \log(P_2)</math>, and values can still be compared in this form
c9ce6d4d967beefa8025225be184a4442b6adb7b
1182
1181
2024-05-22T00:55:02Z
Ben
2
/* Other Notes */
wikitext
text/x-wiki
Isaac's Beam Search learning notes
=== Links ===
* [https://www.width.ai/post/what-is-beam-search#:~:text=Beam%20search%20is%20an%20algorithm,probability%20or%20next%20output%20character. Width.ai Beam Search]
* [https://towardsdatascience.com/foundations-of-nlp-explained-visually-beam-search-how-it-works-1586b9849a24 towardsdatascience.com visuals]
*[https://en.wikipedia.org/wiki/Log_probability Log probability wiki]
*[https://en.wikipedia.org/wiki/Softmax_function Softmax_function wiki]
[[Category:Beam Search]]
=== Motivation ===
Beam Search is used in the context of AI as a final decision making layer for many NLP and speech recognition models.
Example: Sequence to Sequence (Seq2Seq) Modeling
Task: Translate a sentence from Arabic to English.
Overview:
* '''1. Tokenization/Normalization:''' Arabic sentence is split into tokens designed for the LLM and normalized
* '''2. Encoding:''' The model maps the tokens to numerical IDs and the encoder generates a sequence of hidden states representing the input sentence
* '''3. Initialize Beam Search:''' Determine the parameters of Beam Search and the decoder's initial states
* '''4. Decoding:''' Begin with start-of-sequence token. Model generates probabilities for the next token in each sequence and passes them through an output layer, including a softmax function that normalizes probabilities. Beam Search chooses which paths to continue following and prunes the rest.
* '''5. Finalize Output:''' Select Beam with highest probability as final translation, convert tokenized output back into a readable sentence with punctuation, capitalization, etc.
Conditional Probability Notes:
* Sequence to Sequence search algorithms are based on '''Conditional Probability''', which describes the likelihood of an event happening given that another event or sequence of events has already occurred
* i.e. the probability of a new token is NOT independent of the existing sequence and can be calculated as <math>Prob(ABC) = Prob(AB) \cdot Prob(C|AB)</math>, where <math>Prob(C|AB)</math> is the probability of <math>C</math> occurring given that <math>AB</math> has already occurred
* As a result, the graph we run the algorithm on is NOT a Markov chain: each state depends on the entire preceding sequence, not just on the previous state.
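* For a quick worked example with made-up numbers: if <math>Prob(AB) = 0.4</math> and <math>Prob(C|AB) = 0.5</math>, then <math>Prob(ABC) = 0.4 \cdot 0.5 = 0.2</math>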
=== Naive Approach #1: Greedy ===
* Greedy Search takes the best option at each state in the graph, regardless of earlier or later choices in the sequence. In the context of sequence to sequence, greedy search takes the highest-probability word at each position in the sequence and adds it to the output.
Let's take <math>A, B, C</math> as possible words for our sequence with initial probabilities <math>A_0=0.2, B_0=0.5, C_0=0.3</math>
The greedy search will take B, the word with the highest probability, to be the first word in our sequence. It will continue as such until the end of the sentence is reached.
This greedy strategy may be optimal at the current position in the sequence, but as one might predict, the algorithm struggles with longer outputs, where the locally best path is often not globally optimal.
If we have at most <math>m</math> words in our final sentence and each state has <math>n</math> word options, this algorithm runs with a time complexity of <math>O(nm)</math>
=== Naive Approach #2: Breadth First Search ===
The BFS approach considers every possible sequence of words and outputs the highest probability sequence among all.
For example, let's take <math>A, B, C</math> as possible words for our sequence. (Assuming the length of the sequence is fixed at 3) BFS will find the maximum of <math>Prob(AAA), Prob(AAB), ... Prob(CCC)</math> and output that respective sequence
BFS is guaranteed to find the optimal sequence, but its runtime is far too large to be feasible for use.
If we have at most <math>m</math> words in our final sentence and each state has <math>n</math> word options, this algorithm runs with a time complexity of <math>O(n^m)</math>
=== Beam Search ===
Beam Search is a heuristic algorithm which combines the ideas of the Greedy and BFS algorithms.
* Maintains a fixed number of the most promising candidate sequences, called the 'beam width'
* At each iteration, expands every candidate sequence with all possible continuations, then keeps the 'beam width' best of those expansions and repeats
* After the algorithm finishes, the best of the remaining candidate sequences is taken as the output
If we have at most <math>m</math> words in our final sentence, each state has <math>n</math> word options, and we maintain a beam width of <math>k</math>, this algorithm runs with a time complexity of <math>O(nmk\log(k))</math>. A minimal code sketch of this loop follows the figure below.
[[File:Screenshot 2024-05-21 at 7.50.48 PM.png|thumb]]
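To make the loop above concrete, here is a minimal, illustrative Python sketch (it is not taken from any particular library). The <code>next_token_log_probs</code> callback is a hypothetical stand-in for whatever model produces the next-token distribution, and scores are accumulated as sums of log probabilities, as discussed under Other Notes below.
<syntaxhighlight lang="python">
import math
from typing import Callable, Dict, List, Tuple

def beam_search(
    next_token_log_probs: Callable[[List[str]], Dict[str, float]],
    beam_width: int,
    max_len: int,
    eos_token: str = "<eos>",
) -> List[str]:
    """Return the highest-scoring sequence found with beam search."""
    # Each beam is (sequence_so_far, cumulative_log_probability).
    beams: List[Tuple[List[str], float]] = [([], 0.0)]
    for _ in range(max_len):
        candidates: List[Tuple[List[str], float]] = []
        for seq, score in beams:
            if seq and seq[-1] == eos_token:
                # Finished sequences carry over unchanged.
                candidates.append((seq, score))
                continue
            for token, log_p in next_token_log_probs(seq).items():
                candidates.append((seq + [token], score + log_p))
        # Keep only the 'beam width' best continuations.
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:beam_width]
        if all(seq and seq[-1] == eos_token for seq, _ in beams):
            break
    return max(beams, key=lambda c: c[1])[0]

def toy_model(prefix: List[str]) -> Dict[str, float]:
    """Toy stand-in for a real model: the same three options at every step."""
    return {"A": math.log(0.2), "B": math.log(0.5), "<eos>": math.log(0.3)}

print(beam_search(toy_model, beam_width=2, max_len=4))
</syntaxhighlight>
With <code>beam_width=1</code> this degenerates into the greedy search above, and with an unbounded beam width it becomes the exhaustive BFS, which is exactly the trade-off Beam Search is meant to strike.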
'''Top-K Sampling for Beam Search''':
*<math>k</math> -> only the <math>k</math> highest-probability tokens are considered at each step, which also caps the width of the beams during search
*the probabilities of all other tokens are set to 0
*allows for a more consistent runtime
* lower value => faster runtime, less diversity, less optimal results
'''Top-P Sampling for Beam Search (Nucleus Sampling)''':
*<math>p</math> -> threshold on cumulative probability: the smallest set of highest-probability tokens whose probabilities sum to at least <math>p</math> is kept
*the number of tokens kept is therefore determined dynamically from the probability distribution -> ensures a fixed proportion of the probability mass is always considered (see the code sketch below)
*top-p sampling allows for more diversity in generated sequences
* lower value => faster runtime, less diversity, less optimal results
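As an illustration of these two filtering rules applied to a single next-token distribution (a self-contained sketch, independent of any particular framework):
<syntaxhighlight lang="python">
import numpy as np

def top_k_filter(probs: np.ndarray, k: int) -> np.ndarray:
    """Keep only the k highest-probability tokens; zero out the rest and renormalize."""
    filtered = np.zeros_like(probs)
    top = np.argsort(probs)[-k:]          # indices of the k largest probabilities
    filtered[top] = probs[top]
    return filtered / filtered.sum()

def top_p_filter(probs: np.ndarray, p: float) -> np.ndarray:
    """Keep the smallest set of top tokens whose cumulative probability reaches p."""
    order = np.argsort(probs)[::-1]       # tokens sorted by descending probability
    cumulative = np.cumsum(probs[order])
    keep = order[:np.searchsorted(cumulative, p) + 1]  # how many tokens survive is dynamic
    filtered = np.zeros_like(probs)
    filtered[keep] = probs[keep]
    return filtered / filtered.sum()

# Toy 5-token distribution.
probs = np.array([0.05, 0.10, 0.15, 0.30, 0.40])
print(top_k_filter(probs, k=2))   # only the two most likely tokens survive
print(top_p_filter(probs, p=0.8)) # keeps 0.40 + 0.30 + 0.15 >= 0.8
</syntaxhighlight>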
=== Other Notes ===
'''Softmax Function:'''
[[File:Screenshot 2024-05-21 at 6.56.45 PM.png|thumb]]
* ensures that all probabilities at each state are in the range (0,1) and sum to 1
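* For reference, the standard softmax formula (the one shown in the screenshot) is:
* <math>\sigma(z)_i = \frac{e^{z_i}}{\sum_{j=1}^{K} e^{z_j}}</math> for <math>i = 1, \ldots, K</math>, where <math>z</math> is the vector of raw scores (logits) over the <math>K</math> candidate tokens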
'''Log semiring trick'''
* Using the [https://en.wikipedia.org/wiki/Log_semiring log semiring] to deal with very small probability values: let <math>P</math> be the cumulative probability score of all words in the sequence so far, <math>P = p_0 p_1 p_2 \cdots p_n</math>
* Since the value of <math>P</math> can become very small, we can run into floating-point rounding (underflow) errors. One strategy around this is to work with the sum of the natural logs of the individual probabilities instead of the product, and to use that sum when comparing values of <math>P</math>. This works because
* <math>P = p_0 p_1 p_2 \cdots p_n</math>
* <math>\log(P) = \log(p_0) + \log(p_1) + \log(p_2) + \cdots + \log(p_n)</math>
* <math>\log</math> is monotonically increasing, so <math>P_1 > P_2 \iff \log(P_1) > \log(P_2)</math>, and values can still be compared in this form (the short snippet below shows the underflow issue concretely)
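A quick numerical illustration of the underflow problem, using nothing beyond the Python standard library:
<syntaxhighlight lang="python">
import math

p = 0.1   # per-token probability, chosen only for illustration
n = 400   # sequence length

product = p ** n           # underflows to exactly 0.0 in double precision
log_sum = n * math.log(p)  # the equivalent log score stays a usable finite number

print(product)   # 0.0
print(log_sum)   # -921.03...
</syntaxhighlight>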
8ced1c13c9ceff0d36270f3a91f3d79e29144d33
Setting Up MediaWiki on AWS
0
265
1169
1132
2024-05-21T18:27:46Z
Ben
2
/* Extras */
wikitext
text/x-wiki
This document contains a walk-through of how to set up MediaWiki on AWS. In total, this process should take about 30 minutes.
=== Getting Started ===
# Use [https://aws.amazon.com/marketplace/pp/prodview-3tokjpxwvddp2 this service] to install MediaWiki to an EC2 instance
# After installing, SSH into the instance. The MediaWiki root files live in <code>/var/www/html</code>, and Apache configuration files live in <code>/etc/apache2</code>
=== Set up A Record ===
In your website DNS, create an A record which points from your desired URL to the IP address of the newly-created EC2 instance. For example:
* '''Host name''': <code>@</code> (for the main domain) or <code>wiki</code> (for a subdomain)
* '''Type''': <code>A</code>
* '''Data''': <code>127.0.0.1</code> (use the IP address of the EC2 instance)
=== Installing an SSL Certificate ===
Roughly follow [https://www.digitalocean.com/community/tutorials/how-to-secure-apache-with-let-s-encrypt-on-ubuntu-20-04 these instructions] to set up HTTPS with a free Let's Encrypt certificate
# Install certbot
<syntaxhighlight lang="bash">
sudo apt install certbot python3-certbot-apache
</syntaxhighlight>
# Run certbot (you can just select "No redirect")
<syntaxhighlight lang="bash">
sudo certbot --apache
</syntaxhighlight>
# Verify certbot
<syntaxhighlight lang="bash">
sudo systemctl status certbot.timer
</syntaxhighlight>
=== Update Page URLs ===
To get MediaWiki to display pages using the <code>/w/Some_Page</code> URL, modify <code>LocalSettings.php</code>. Replace the line where it says <code>$wgScriptPath = "";</code> with the following:
<syntaxhighlight lang="php">
$wgScriptPath = "";
$wgArticlePath = "/w/$1";
$wgUsePathInfo = true;
</syntaxhighlight>
Next, in <code>/etc/apache2/apache2.conf</code>, add the following section:
<syntaxhighlight lang="text">
<Directory /var/www/html>
AllowOverride all
</Directory>
</syntaxhighlight>
Create a new <code>.htaccess</code> in <code>/var/www/html</code> with the following contents:
<syntaxhighlight lang="text">
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^w/(.*)$ index.php/$1 [PT,L,QSA]
</syntaxhighlight>
Finally, reload Apache2:
<syntaxhighlight lang="bash">
sudo service apache2 reload
</syntaxhighlight>
=== Extras ===
Here are some notes about some extra features you can add on top of MediaWiki.
==== Storing images in S3 with Cloudfront ====
You can use the [https://www.mediawiki.org/wiki/Extension:AWS AWS MediaWiki Extension] to allow users to upload images and files, store them in S3, and serve them through CloudFront.
This process is somewhat involved. I may write notes about how to do this well in the future.
==== Enabling LaTeX ====
# Install the [https://www.mediawiki.org/wiki/Extension:Math Math] extension to the <code>extensions</code> directory
# Change to the <code>origin/REL1_31</code> branch
# Run the update script: <code>sudo php maintenance/update.php</code>
6a48a4427e157ca0e0dc98888d971b4a6221f2f8
1170
1169
2024-05-21T18:31:42Z
Ben
2
/* Enabling LaTeX */
wikitext
text/x-wiki
This document contains a walk-through of how to set up MediaWiki on AWS. In total, this process should take about 30 minutes.
=== Getting Started ===
# Use [https://aws.amazon.com/marketplace/pp/prodview-3tokjpxwvddp2 this service] to install MediaWiki to an EC2 instance
# After installing, SSH into the instance. The MediaWiki root files live in <code>/var/www/html</code>, and Apache configuration files live in <code>/etc/apache2</code>
=== Set up A Record ===
In your website DNS, create an A record which points from your desired URL to the IP address of the newly-created EC2 instance. For example:
* '''Host name''': <code>@</code> (for the main domain) or <code>wiki</code> (for a subdomain)
* '''Type''': <code>A</code>
* '''Data''': <code>127.0.0.1</code> (use the IP address of the EC2 instance)
=== Installing an SSL Certificate ===
Roughly follow [https://www.digitalocean.com/community/tutorials/how-to-secure-apache-with-let-s-encrypt-on-ubuntu-20-04 these instructions] to set up HTTPS with a free Let's Encrypt certificate
# Install certbot
<syntaxhighlight lang="bash">
sudo apt install certbot python3-certbot-apache
</syntaxhighlight>
# Run certbot (you can just select "No redirect")
<syntaxhighlight lang="bash">
sudo certbot --apache
</syntaxhighlight>
# Verify certbot
<syntaxhighlight lang="bash">
sudo systemctl status certbot.timer
</syntaxhighlight>
=== Update Page URLs ===
To get MediaWiki to display pages using the <code>/w/Some_Page</code> URL, modify <code>LocalSettings.php</code>. Replace the line where it says <code>$wgScriptPath = "";</code> with the following:
<syntaxhighlight lang="php">
$wgScriptPath = "";
$wgArticlePath = "/w/$1";
$wgUsePathInfo = true;
</syntaxhighlight>
Next, in <code>/etc/apache2/apache2.conf</code>, add the following section:
<syntaxhighlight lang="text">
<Directory /var/www/html>
AllowOverride all
</Directory>
</syntaxhighlight>
Create a new <code>.htaccess</code> in <code>/var/www/html</code> with the following contents:
<syntaxhighlight lang="text">
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^w/(.*)$ index.php/$1 [PT,L,QSA]
</syntaxhighlight>
Finally, reload Apache2:
<syntaxhighlight lang="bash">
sudo service apache2 reload
</syntaxhighlight>
=== Extras ===
Here are some notes about some extra features you can add on top of MediaWiki.
==== Storing images in S3 with Cloudfront ====
You can use the [https://www.mediawiki.org/wiki/Extension:AWS AWS MediaWiki Extension] to allow users to upload images and files, store them in S3, and serve them through CloudFront.
This process is somewhat involved. I may write notes about how to do this well in the future.
==== Enabling LaTeX ====
* Install the <code>REL1_31</code> branch of the [https://www.mediawiki.org/wiki/Extension:Math Math] extension to the <code>extensions</code> directory
<syntaxhighlight lang="bash">
sudo git clone -b REL1_31 --single-branch --depth 1 https://gerrit.wikimedia.org/r/mediawiki/extensions/Math
</syntaxhighlight>
* Enable the extension in <code>LocalSettings.php</code>
<syntaxhighlight lang="php">
wfLoadExtension('Math');
</syntaxhighlight>
* Run the update script
<syntaxhighlight lang="bash">
sudo php maintenance/update.php
</syntaxhighlight>
cdf268539a8538ebe7b8e58c8d2eb4c83fef9b59
MuJoCo MJX
0
272
1171
2024-05-21T21:34:04Z
Vrtnis
21
Created page with "== Overview == '''MuJoCo XLA (MJX)''' is a specialized extension of the MuJoCo physics engine, designed to run simulations on hardware supported by the XLA (Accelerated Linea..."
wikitext
text/x-wiki
== Overview ==
'''MuJoCo XLA (MJX)''' is a specialized extension of the MuJoCo physics engine, designed to run simulations on hardware supported by the XLA (Accelerated Linear Algebra) compiler via the JAX framework.
5a7feddccaac5a741ad21783986ac6d7e1ac7668
1172
1171
2024-05-21T21:34:27Z
Vrtnis
21
wikitext
text/x-wiki
== Overview ==
'''MuJoCo XLA (MJX)''' is a specialized extension of the [[MuJoCo]] physics engine, designed to run simulations on hardware supported by the XLA (Accelerated Linear Algebra) compiler via the JAX framework.
433afe4e8c0638664314dea0e02964043d76bafa
1173
1172
2024-05-21T21:41:29Z
Vrtnis
21
wikitext
text/x-wiki
== Overview ==
'''MuJoCo XLA (MJX)''' is a specialized extension of the [[MuJoCo]] physics engine, designed to run simulations on hardware supported by the XLA (Accelerated Linear Algebra) compiler via the JAX framework.
=== Installation ===
Install using:
<syntaxhighlight lang="bash">
pip install mujoco-mjx
</syntaxhighlight>
a09ba0fc598c58c1ce59a85894d65d3b99096b20
1174
1173
2024-05-21T21:42:14Z
Vrtnis
21
wikitext
text/x-wiki
== Overview ==
'''MuJoCo XLA (MJX)''' is a specialized extension of the [[MuJoCo]] physics engine, designed to run simulations on hardware supported by the XLA (Accelerated Linear Algebra) compiler via the JAX framework.
=== Installation ===
Install using:
<syntaxhighlight lang="bash">
pip install mujoco-mjx
</syntaxhighlight>
== Colab Tutorial ==
A detailed tutorial demonstrating the use of MJX along with reinforcement learning to train humanoid and quadruped robots to locomote is available [https://colab.research.google.com/github/google-deepmind/mujoco/blob/main/mjx/tutorial.ipynb#scrollTo=MpkYHwCqk7W- here].
13b5abfa3817ffe2647e93b862535292b577f8cb
1183
1174
2024-05-22T04:29:30Z
Vrtnis
21
wikitext
text/x-wiki
== Overview ==
'''MuJoCo XLA (MJX)''' is a specialized extension of the [[MuJoCo]] physics engine, designed to run simulations on hardware supported by the XLA (Accelerated Linear Algebra) compiler via the JAX framework. Running a single physics simulation on an accelerator is not very efficient on its own; the advantage of MJX is that many environments can be run in parallel on a hardware-accelerated device.
=== Installation ===
Install using:
<syntaxhighlight lang="bash">
pip install mujoco-mjx
</syntaxhighlight>
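As a minimal usage sketch (assuming the <code>mujoco</code> and <code>jax</code> packages are installed; the tiny XML model and the batch size are placeholders), a batch of environments can be stepped in parallel roughly like this:
<syntaxhighlight lang="python">
import jax
import mujoco
from mujoco import mjx

# Placeholder model: a single free-floating sphere.
XML = """
<mujoco>
  <worldbody>
    <body>
      <freejoint/>
      <geom type="sphere" size="0.1"/>
    </body>
  </worldbody>
</mujoco>
"""

mj_model = mujoco.MjModel.from_xml_string(XML)
mj_data = mujoco.MjData(mj_model)

# Move the model and data onto the accelerator as MJX structures.
mjx_model = mjx.put_model(mj_model)
mjx_data = mjx.put_data(mj_model, mj_data)

# Build a batch of slightly perturbed copies of the data, then step them all at once.
keys = jax.random.split(jax.random.PRNGKey(0), 4096)
batch = jax.vmap(
    lambda key: mjx_data.replace(
        qpos=mjx_data.qpos + 0.01 * jax.random.normal(key, mjx_data.qpos.shape)
    )
)(keys)

step_fn = jax.jit(jax.vmap(mjx.step, in_axes=(None, 0)))
batch = step_fn(mjx_model, batch)
print(batch.qpos.shape)  # (4096, number of position coordinates)
</syntaxhighlight>
The Colab tutorial linked below follows the same pattern in much more detail, including wiring the batched environments into a reinforcement learning loop.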
== Colab Tutorial ==
A detailed tutorial demonstrating the use of MJX along with reinforcement learning to train humanoid and quadruped robots to locomote is available [https://colab.research.google.com/github/google-deepmind/mujoco/blob/main/mjx/tutorial.ipynb#scrollTo=MpkYHwCqk7W- here].
4b112ba4d1c8f46e0c74006aedeff361af99c891
File:Screenshot 2024-05-21 at 6.56.45 PM.png
6
273
1175
2024-05-21T22:57:36Z
Is2ac
30
wikitext
text/x-wiki
Softmax Formula
64ee9f66ab39e3890d24cb48ad79b824588d7eda
File:Screenshot 2024-05-21 at 7.50.48 PM.png
6
274
1176
2024-05-21T23:51:40Z
Is2ac
30
wikitext
text/x-wiki
Beam Search graphic from towardsdatascience.com
177914ee2d51b1c90ff554e31d1b9db386a83d86
Humanoid Gym
0
275
1184
2024-05-22T17:06:46Z
Vrtnis
21
Created page with "Humanoid-Gym is an advanced reinforcement learning (RL) framework built on Nvidia Isaac Gym, designed for training locomotion skills in humanoid robots. Notably, it emphasizes..."
wikitext
text/x-wiki
Humanoid-Gym is an advanced reinforcement learning (RL) framework built on Nvidia Isaac Gym, designed for training locomotion skills in humanoid robots. Notably, it emphasizes zero-shot transfer, enabling skills learned in simulation to be directly applied to real-world environments without additional adjustments.
51a7d732d608fe17a24ad34da4891d702f1fa5fd
1185
1184
2024-05-22T17:12:31Z
Vrtnis
21
wikitext
text/x-wiki
Humanoid-Gym is an advanced reinforcement learning (RL) framework built on Nvidia Isaac Gym, designed for training locomotion skills in humanoid robots. Notably, it emphasizes zero-shot transfer, enabling skills learned in simulation to be directly applied to real-world environments without additional adjustments.
Humanoid-Gym streamlines the process of training humanoid robots by providing an intuitive and efficient platform. By integrating Nvidia Isaac Gym with MuJoCo, it allows users to test and verify trained policies in various simulation environments. This capability ensures the robustness and versatility of the trained behaviors, facilitating a seamless transition from virtual training to real-world application.
e002f792f4a6ff4d57046537591bc929bd8568ab
1186
1185
2024-05-22T17:15:56Z
Vrtnis
21
wikitext
text/x-wiki
Humanoid-Gym is an advanced reinforcement learning (RL) framework built on Nvidia Isaac Gym, designed for training locomotion skills in humanoid robots. Notably, it emphasizes zero-shot transfer, enabling skills learned in simulation to be directly applied to real-world environments without additional adjustments.
Humanoid-Gym streamlines the process of training humanoid robots by providing an intuitive and efficient platform. By integrating Nvidia Isaac Gym with MuJoCo, it allows users to test and verify trained policies in various simulation environments. This capability ensures the robustness and versatility of the trained behaviors, facilitating a seamless transition from virtual training to real-world application.
[https://github.com/roboterax/humanoid-gym GitHub]
fcf22a9520a67b55c8d592e1f05b85c4e54d4ceb
K-Scale Humanoid Gym
0
276
1187
2024-05-22T17:56:36Z
Vrtnis
21
Created page with "K-Scale has an updated fork of [[Humanoid Gym]] available at . One of the significant changes involved modifying how the framework handles the initialization of simulation d..."
wikitext
text/x-wiki
K-Scale has an updated fork of [[Humanoid Gym]], available at this [https://github.com/kscalelabs/humanoid-gym GitHub repository].
One of the significant changes involved modifying how the framework handles the initialization of simulation data. Previously, the framework used a fixed dimension for reshaping tensors, which limited its flexibility in handling different numbers of bodies in the simulation.
a60274c3bc3057f2793dcd50eb5aa75ab435ce97
1188
1187
2024-05-22T18:03:32Z
Vrtnis
21
/* Adding framework changes */
wikitext
text/x-wiki
K-Scale has an updated fork of [[Humanoid Gym]], available at this [https://github.com/kscalelabs/humanoid-gym GitHub repository].
One of the significant changes involved modifying how the framework handles the initialization of simulation data. Previously, the framework used a fixed dimension for reshaping tensors, which limited its flexibility in handling different numbers of bodies in the simulation. The recent update adjusted this process to allow for any number of bodies, thereby improving the framework's ability to manage various simulation scenarios more effectively.
8e041ab68f69f01f1caff3bd9cb1bc668131d120
1189
1188
2024-05-22T18:04:49Z
Vrtnis
21
/*add gh url*/
wikitext
text/x-wiki
K-Scale has an updated fork of [[Humanoid Gym]] available at this [https://github.com/kscalelabs/humanoid-gym GitHub repository].
One of the significant changes involved modifying how the framework handles the initialization of simulation data. Previously, the framework used a fixed dimension for reshaping tensors, which limited its flexibility in handling different numbers of bodies in the simulation. The recent update adjusted this process to allow for any number of bodies, thereby improving the framework's ability to manage various simulation scenarios more effectively.
f12efa63ab171433d61456e839061a87e825c85d
1190
1189
2024-05-22T18:06:47Z
Vrtnis
21
/*added graphics handling changes*/
wikitext
text/x-wiki
K-Scale has an updated fork of [[Humanoid Gym]] available at this [https://github.com/kscalelabs/humanoid-gym GitHub repository].
One of the significant changes involved modifying how the framework handles the initialization of simulation data. Previously, the framework used a fixed dimension for reshaping tensors, which limited its flexibility in handling different numbers of bodies in the simulation. The recent update adjusted this process to allow for any number of bodies, thereby improving the framework's ability to manage various simulation scenarios more effectively.
Another important enhancement was the optimization of graphics device handling, especially in headless mode where rendering is not required. The update introduced a conditional setting that disables the graphics device when the simulation runs in headless mode. This change helps improve performance by avoiding unnecessary rendering operations, making the framework more efficient for large-scale simulations where visual output is not needed.
1072d89d4a8fa9df0c78ff569d4e97b725b56c03
1191
1190
2024-05-22T18:12:54Z
Vrtnis
21
/* add descriptive titles*/
wikitext
text/x-wiki
K-Scale has an updated fork of [[Humanoid Gym]] available at this [https://github.com/kscalelabs/humanoid-gym GitHub repository].
==== Updates to initialization of simulation data ====
One of the significant changes involved modifying how the framework handles the initialization of simulation data. Previously, the framework used a fixed dimension for reshaping tensors, which limited its flexibility in handling different numbers of bodies in the simulation. The recent update adjusted this process to allow for any number of bodies, thereby improving the framework's ability to manage various simulation scenarios more effectively.
==== Optimization of graphics device handling ====
Another important enhancement was the optimization of graphics device handling, especially in headless mode where rendering is not required. The update introduced a conditional setting that disables the graphics device when the simulation runs in headless mode. This change helps improve performance by avoiding unnecessary rendering operations, making the framework more efficient for large-scale simulations where visual output is not needed.
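The exact changes live in the fork linked above; the following is only a rough, hypothetical sketch of the two patterns described here, written against the stock Isaac Gym Python API:
<syntaxhighlight lang="python">
from isaacgym import gymapi, gymtorch

gym = gymapi.acquire_gym()
sim_params = gymapi.SimParams()

headless = True
compute_device_id = 0
# Passing -1 as the graphics device id disables rendering entirely, which is the
# usual way to skip graphics work when running headless.
graphics_device_id = -1 if headless else 0
sim = gym.create_sim(compute_device_id, graphics_device_id, gymapi.SIM_PHYSX, sim_params)

# ... environments and actors would be created here ...

# Reshaping the rigid-body state tensor with -1 lets the same code handle any
# number of bodies per environment instead of a hard-coded count.
num_envs = 4096
rb_states = gymtorch.wrap_tensor(gym.acquire_rigid_body_state_tensor(sim))
rb_states = rb_states.view(num_envs, -1, 13)  # 13 floats per rigid-body state
</syntaxhighlight>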
43625a6a5e8fb5c4fb0320323c20c4f93360e7be
K-Scale Sim Library
0
277
1192
2024-05-22T18:56:55Z
Vrtnis
21
Created page with "= K-Scale Sim Library = A library for simulating Stompy in Isaac Gym. This library is built on top of the Isaac Gym library and Humanoid-gym and provides a simple interface fo..."
wikitext
text/x-wiki
= K-Scale Sim Library =
A library for simulating Stompy in Isaac Gym. This library is built on top of the Isaac Gym library and Humanoid-gym and provides a simple interface for running experiments with Stompy. For a start, we have defined two tasks: getting up and walking.
6549b764457d43567a2d448eb3a16338c6d314c8
1193
1192
2024-05-22T18:57:15Z
Vrtnis
21
/* K-Scale Sim Library */
wikitext
text/x-wiki
A library for simulating Stompy in Isaac Gym. This library is built on top of the Isaac Gym library and Humanoid-gym and provides a simple interface for running experiments with Stompy. For a start, we have defined two tasks: getting up and walking.
0b8dee7fb0c3b4ceb5d0284ea3bdf6da1021bc13
1194
1193
2024-05-22T18:58:20Z
Vrtnis
21
wikitext
text/x-wiki
A library for simulating Stompy in Isaac Gym. This library is built on top of the Isaac Gym library and Humanoid-gym and provides a simple interface for running experiments with Stompy. For a start, we have defined two tasks: getting up and walking.
We will be adding more tasks and simulator environments in upcoming weeks.
The walking task works reliably with the upper body being fixed. The getting up task is still an open challenge!
07d9bcfed2a7adad55f2e78e20d09a1afa17ccc2
1195
1194
2024-05-22T18:59:03Z
Vrtnis
21
wikitext
text/x-wiki
A library for simulating Stompy in Isaac Gym. This library is built on top of the Isaac Gym library and Humanoid-gym and provides a simple interface for running experiments with Stompy. For a start, we have defined two tasks: getting up and walking.
We will be adding more tasks and simulator environments in upcoming weeks.
The walking task works reliably with the upper body being fixed. The getting up task is still an open challenge!
== Getting Started ==
This repository requires Python 3.8 due to compatibility issues with underlying libraries. We hope to support more recent Python versions in the future.
afcd3ac099764ed2fa5792523672d7a0bcc2aca2
1196
1195
2024-05-22T19:00:26Z
Vrtnis
21
wikitext
text/x-wiki
A library for simulating Stompy in Isaac Gym. This library is built on top of the Isaac Gym library and Humanoid-gym and provides a simple interface for running experiments with Stompy. For a start, we have defined two tasks: getting up and walking.
We will be adding more tasks and simulator environments in upcoming weeks.
The walking task works reliably with the upper body being fixed. The getting up task is still an open challenge!
== Getting Started ==
This repository requires Python 3.8 due to compatibility issues with underlying libraries. We hope to support more recent Python versions in the future.
Clone this repository:
<pre>
git clone https://github.com/kscalelabs/sim.git
cd sim
</pre>
Create a new conda environment and install the package:
<pre>
conda create --name kscale-sim-library python=3.8.19
conda activate kscale-sim-library
make install-dev
</pre>
df8d2e96351e6e1b1cfa3705f4f8859d1a35f90b
1197
1196
2024-05-22T19:01:09Z
Vrtnis
21
wikitext
text/x-wiki
A library for simulating Stompy in Isaac Gym. This library is built on top of the Isaac Gym library and Humanoid-gym and provides a simple interface for running experiments with Stompy. For a start, we have defined two tasks: getting up and walking.
We will be adding more tasks and simulator environments in upcoming weeks.
The walking task works reliably with the upper body being fixed. The getting up task is still an open challenge!
== Getting Started ==
This repository requires Python 3.8 due to compatibility issues with underlying libraries. We hope to support more recent Python versions in the future.
Clone this repository:
<pre>
git clone https://github.com/kscalelabs/sim.git
cd sim
</pre>
Create a new conda environment and install the package:
<pre>
conda create --name kscale-sim-library python=3.8.19
conda activate kscale-sim-library
make install-dev
</pre>
Install third-party dependencies:
Manually download IsaacGym_Preview_4_Package.tar.gz from [https://developer.nvidia.com/isaac-gym], and run:
<pre>
tar -xvf IsaacGym_Preview_4_Package.tar.gz
conda env config vars set ISAACGYM_PATH=`pwd`/isaacgym
conda deactivate
conda activate kscale-sim-library
make install-third-party-external
</pre>
796825471afc45fb7051b90134766358f8e11918
1198
1197
2024-05-22T19:02:56Z
Vrtnis
21
/*Stompy expertiments setup*/
wikitext
text/x-wiki
A library for simulating Stompy in Isaac Gym. This library is built on top of the Isaac Gym library and Humanoid-gym and provides a simple interface for running experiments with Stompy. For a start, we have defined two tasks: getting up and walking.
We will be adding more tasks and simulator environments in upcoming weeks.
The walking task works reliably with the upper body being fixed. The getting up task is still an open challenge!
== Getting Started ==
This repository requires Python 3.8 due to compatibility issues with underlying libraries. We hope to support more recent Python versions in the future.
Clone this repository:
<pre>
git clone https://github.com/kscalelabs/sim.git
cd sim
</pre>
Create a new conda environment and install the package:
<pre>
conda create --name kscale-sim-library python=3.8.19
conda activate kscale-sim-library
make install-dev
</pre>
Install third-party dependencies:
Manually download IsaacGym_Preview_4_Package.tar.gz from [https://developer.nvidia.com/isaac-gym], and run:
<pre>
tar -xvf IsaacGym_Preview_4_Package.tar.gz
conda env config vars set ISAACGYM_PATH=`pwd`/isaacgym
conda deactivate
conda activate kscale-sim-library
make install-third-party-external
</pre>
== Running Stompy experiments ==
Download our URDF model from here:
<pre>
wget https://media.kscale.dev/stompy/latest_stl_urdf.tar.gz && tar -xzvf latest_stl_urdf.tar.gz
python sim/scripts/create_fixed_torso.py
export MODEL_DIR=stompy
</pre>
Run training with the following command:
<pre>
python sim/humanoid_gym/train.py --task=legs_ppo --num_envs=4096 --headless
</pre>
or for full body:
<pre>
python sim/humanoid_gym/train.py --task=stompy_ppo --num_envs=4096 --headless
</pre>
Run evaluation with the following command:
<pre>
python sim/humanoid_gym/play.py --task legs_ppo --sim_device cpu
</pre>
See this doc for more beginner tips.
23d742d9f7b2fce00d9438128e9d9776440617c2
1199
1198
2024-05-22T19:03:45Z
Vrtnis
21
wikitext
text/x-wiki
A library for simulating Stompy in Isaac Gym. This library is built on top of the Isaac Gym library and Humanoid-gym and provides a simple interface for running experiments with Stompy. For a start, we have defined two tasks: getting up and walking.
We will be adding more tasks and simulator environments in upcoming weeks.
The walking task works reliably with the upper body being fixed. The getting up task is still an open challenge!
== Getting Started ==
This repository requires Python 3.8 due to compatibility issues with underlying libraries. We hope to support more recent Python versions in the future.
Clone this repository:
<pre>
git clone https://github.com/kscalelabs/sim.git
cd sim
</pre>
Create a new conda environment and install the package:
<pre>
conda create --name kscale-sim-library python=3.8.19
conda activate kscale-sim-library
make install-dev
</pre>
Install third-party dependencies:
Manually download IsaacGym_Preview_4_Package.tar.gz from [https://developer.nvidia.com/isaac-gym], and run:
<pre>
tar -xvf IsaacGym_Preview_4_Package.tar.gz
conda env config vars set ISAACGYM_PATH=`pwd`/isaacgym
conda deactivate
conda activate kscale-sim-library
make install-third-party-external
</pre>
== Running Stompy experiments ==
Download our URDF model from here:
<pre>
wget https://media.kscale.dev/stompy/latest_stl_urdf.tar.gz && tar -xzvf latest_stl_urdf.tar.gz
python sim/scripts/create_fixed_torso.py
export MODEL_DIR=stompy
</pre>
Run training with the following command:
<pre>
python sim/humanoid_gym/train.py --task=legs_ppo --num_envs=4096 --headless
</pre>
or for full body:
<pre>
python sim/humanoid_gym/train.py --task=stompy_ppo --num_envs=4096 --headless
</pre>
Run evaluation with the following command:
<pre>
python sim/humanoid_gym/play.py --task legs_ppo --sim_device cpu
</pre>
See this doc for more beginner tips.
== Errors ==
After cloning Isaac Gym, sometimes the bindings mysteriously disappear. To fix this, update the submodule:
<pre>
git submodule update --init --recursive
</pre>
If you observe errors with libpython3.8.so.1.0, you can try the following:
<pre>
export LD_LIBRARY_PATH=PATH_TO_YOUR_ENV/lib:$LD_LIBRARY_PATH
</pre>
If you still see segmentation faults, you can try the following:
<pre>
sudo apt-get install vulkan1
</pre>
77f03eaa2f7774829f00c8e9c7426e3c973aca17
1200
1199
2024-05-22T19:04:02Z
Vrtnis
21
/* Errors */
wikitext
text/x-wiki
A library for simulating Stompy in Isaac Gym. This library is built on top of the Isaac Gym library and Humanoid-gym and provides a simple interface for running experiments with Stompy. For a start, we have defined two tasks: getting up and walking.
We will be adding more tasks and simulator environments in upcoming weeks.
The walking task works reliably with the upper body being fixed. The getting up task is still an open challenge!
== Getting Started ==
This repository requires Python 3.8 due to compatibility issues with underlying libraries. We hope to support more recent Python versions in the future.
Clone this repository:
<pre>
git clone https://github.com/kscalelabs/sim.git
cd sim
</pre>
Create a new conda environment and install the package:
<pre>
conda create --name kscale-sim-library python=3.8.19
conda activate kscale-sim-library
make install-dev
</pre>
Install third-party dependencies:
Manually download IsaacGym_Preview_4_Package.tar.gz from [https://developer.nvidia.com/isaac-gym], and run:
<pre>
tar -xvf IsaacGym_Preview_4_Package.tar.gz
conda env config vars set ISAACGYM_PATH=`pwd`/isaacgym
conda deactivate
conda activate kscale-sim-library
make install-third-party-external
</pre>
== Running Stompy experiments ==
Download our URDF model from here:
<pre>
wget https://media.kscale.dev/stompy/latest_stl_urdf.tar.gz && tar -xzvf latest_stl_urdf.tar.gz
python sim/scripts/create_fixed_torso.py
export MODEL_DIR=stompy
</pre>
Run training with the following command:
<pre>
python sim/humanoid_gym/train.py --task=legs_ppo --num_envs=4096 --headless
</pre>
or for full body:
<pre>
python sim/humanoid_gym/train.py --task=stompy_ppo --num_envs=4096 --headless
</pre>
Run evaluation with the following command:
<pre>
python sim/humanoid_gym/play.py --task legs_ppo --sim_device cpu
</pre>
See this doc for more beginner tips.
== Handling Errors ==
After cloning Isaac Gym, sometimes the bindings mysteriously disappear. To fix this, update the submodule:
<pre>
git submodule update --init --recursive
</pre>
If you observe errors with libpython3.8.so.1.0, you can try the following:
<pre>
export LD_LIBRARY_PATH=PATH_TO_YOUR_ENV/lib:$LD_LIBRARY_PATH
</pre>
If you still see segmentation faults, you can try the following:
<pre>
sudo apt-get install vulkan1
</pre>
f95c55665d1eacb8b1e91f924a7b240e58ca2dee
1201
1200
2024-05-22T19:08:27Z
Vrtnis
21
wikitext
text/x-wiki
A library for simulating Stompy in Isaac Gym. This library is built on top of the Isaac Gym library and Humanoid-gym and provides a simple interface for running experiments with Stompy. For a start, we have defined two tasks: getting up and walking.
We will be adding more tasks and simulator environments in upcoming weeks.
The walking task works reliably with the upper body being fixed. The getting up task is still an open challenge!
== Getting Started ==
This repository requires Python 3.8 due to compatibility issues with underlying libraries. We hope to support more recent Python versions in the future.
Clone this repository:
<pre>
git clone https://github.com/kscalelabs/sim.git
cd sim
</pre>
Create a new conda environment and install the package:
<pre>
conda create --name kscale-sim-library python=3.8.19
conda activate kscale-sim-library
make install-dev
</pre>
Install third-party dependencies:
Manually download IsaacGym_Preview_4_Package.tar.gz from [https://developer.nvidia.com/isaac-gym], and run:
<pre>
tar -xvf IsaacGym_Preview_4_Package.tar.gz
conda env config vars set ISAACGYM_PATH=`pwd`/isaacgym
conda deactivate
conda activate kscale-sim-library
make install-third-party-external
</pre>
== Running Stompy experiments ==
Download our URDF model from here:
<pre>
wget https://media.kscale.dev/stompy/latest_stl_urdf.tar.gz && tar -xzvf latest_stl_urdf.tar.gz
python sim/scripts/create_fixed_torso.py
export MODEL_DIR=stompy
</pre>
Run training with the following command:
<pre>
python sim/humanoid_gym/train.py --task=legs_ppo --num_envs=4096 --headless
</pre>
or for full body:
<pre>
python sim/humanoid_gym/train.py --task=stompy_ppo --num_envs=4096 --headless
</pre>
Run evaluation with the following command:
<pre>
python sim/humanoid_gym/play.py --task legs_ppo --sim_device cpu
</pre>
See this doc for more beginner tips.
== Handling Errors ==
After cloning Isaac Gym, sometimes the bindings mysteriously disappear. To fix this, update the submodule:
<pre>
git submodule update --init --recursive
</pre>
If you observe errors with libpython3.8.so.1.0, you can try the following:
<pre>
export LD_LIBRARY_PATH=PATH_TO_YOUR_ENV/lib:$LD_LIBRARY_PATH
</pre>
If you still see segmentation faults, you can try the following:
<pre>
sudo apt-get install vulkan1
</pre>
Also, see [[Humanoid Gym]] and [[K-Scale Humanoid Gym]]
560077b140756f9894a60825c146ca9783f5673f
1202
1201
2024-05-22T20:16:23Z
Vrtnis
21
/*add wiki-links*/
wikitext
text/x-wiki
A library for simulating [[Stompy]] in [[Isaac Gym]]. This library is built on top of the Isaac Gym library and Humanoid-gym and provides a simple interface for running experiments with Stompy. For a start, we have defined two tasks: getting up and walking.
We will be adding more tasks and simulator environments in upcoming weeks.
The walking task works reliably with the upper body being fixed. The getting up task is still an open challenge!
== Getting Started ==
This repository requires Python 3.8 due to compatibility issues with underlying libraries. We hope to support more recent Python versions in the future.
Clone this repository:
<pre>
git clone https://github.com/kscalelabs/sim.git
cd sim
</pre>
Create a new conda environment and install the package:
<pre>
conda create --name kscale-sim-library python=3.8.19
conda activate kscale-sim-library
make install-dev
</pre>
Install third-party dependencies:
Manually download IsaacGym_Preview_4_Package.tar.gz from [https://developer.nvidia.com/isaac-gym], and run:
<pre>
tar -xvf IsaacGym_Preview_4_Package.tar.gz
conda env config vars set ISAACGYM_PATH=`pwd`/isaacgym
conda deactivate
conda activate kscale-sim-library
make install-third-party-external
</pre>
== Running Stompy experiments ==
Download our URDF model from here:
<pre>
wget https://media.kscale.dev/stompy/latest_stl_urdf.tar.gz && tar -xzvf latest_stl_urdf.tar.gz
python sim/scripts/create_fixed_torso.py
export MODEL_DIR=stompy
</pre>
Run training with the following command:
<pre>
python sim/humanoid_gym/train.py --task=legs_ppo --num_envs=4096 --headless
</pre>
or for full body:
<pre>
python sim/humanoid_gym/train.py --task=stompy_ppo --num_envs=4096 --headless
</pre>
Run evaluation with the following command:
<pre>
python sim/humanoid_gym/play.py --task legs_ppo --sim_device cpu
</pre>
See this doc for more beginner tips.
== Handling Errors ==
After cloning Isaac Gym, sometimes the bindings mysteriously disappear. To fix this, update the submodule:
<pre>
git submodule update --init --recursive
</pre>
If you observe errors with libpython3.8.so.1.0, you can try the following:
<pre>
export LD_LIBRARY_PATH=PATH_TO_YOUR_ENV/lib:$LD_LIBRARY_PATH
</pre>
If you still see segmentation faults, you can try the following:
<pre>
sudo apt-get install vulkan1
</pre>
Also, see [[Humanoid Gym]] and [[K-Scale Humanoid Gym]]
0048eb87fd1ae938652db8e91eab7ee73b39f2b1
1203
1202
2024-05-22T20:18:41Z
Vrtnis
21
wikitext
text/x-wiki
A library for simulating [[Stompy]] in Isaac Gym. This library is built on top of the Isaac Gym library and Humanoid-gym and provides a simple interface for running experiments with Stompy. For a start, we have defined two tasks: getting up and walking.
The library is available at [https://github.com/kscalelabs/sim https://github.com/kscalelabs/sim]
We will be adding more tasks and simulator environments in upcoming weeks.
The walking task works reliably with the upper body being fixed. The getting up task is still an open challenge!
== Getting Started ==
This repository requires Python 3.8 due to compatibility issues with underlying libraries. We hope to support more recent Python versions in the future.
Clone this repository:
<pre>
git clone https://github.com/kscalelabs/sim.git
cd sim
</pre>
Create a new conda environment and install the package:
<pre>
conda create --name kscale-sim-library python=3.8.19
conda activate kscale-sim-library
make install-dev
</pre>
Install third-party dependencies:
Manually download IsaacGym_Preview_4_Package.tar.gz from [https://developer.nvidia.com/isaac-gym], and run:
<pre>
tar -xvf IsaacGym_Preview_4_Package.tar.gz
conda env config vars set ISAACGYM_PATH=`pwd`/isaacgym
conda deactivate
conda activate kscale-sim-library
make install-third-party-external
</pre>
== Running Stompy experiments ==
Download our URDF model from here:
<pre>
wget https://media.kscale.dev/stompy/latest_stl_urdf.tar.gz && tar -xzvf latest_stl_urdf.tar.gz
python sim/scripts/create_fixed_torso.py
export MODEL_DIR=stompy
</pre>
Run training with the following command:
<pre>
python sim/humanoid_gym/train.py --task=legs_ppo --num_envs=4096 --headless
</pre>
or for full body:
<pre>
python sim/humanoid_gym/train.py --task=stompy_ppo --num_envs=4096 --headless
</pre>
Run evaluation with the following command:
<pre>
python sim/humanoid_gym/play.py --task legs_ppo --sim_device cpu
</pre>
See this doc for more beginner tips.
== Handling Errors ==
After cloning Isaac Gym, sometimes the bindings mysteriously disappear. To fix this, update the submodule:
<pre>
git submodule update --init --recursive
</pre>
If you observe errors with libpython3.8.so.1.0, you can try the following:
<pre>
export LD_LIBRARY_PATH=PATH_TO_YOUR_ENV/lib:$LD_LIBRARY_PATH
</pre>
If you still see segmentation faults, you can try the following:
<pre>
sudo apt-get install vulkan1
</pre>
Also, see [[Humanoid Gym]] and [[K-Scale Humanoid Gym]]
0c72f0f1fb0bff58e5d7116b9e44f633169f211c
K-Scale Teleop
0
278
1204
2024-05-22T20:31:29Z
Vrtnis
21
Created page with "== Bi-Manual Remote Robotic Teleoperation == A minimal implementation of a bi-manual remote robotic teleoperation system using VR hand tracking and camera streaming."
wikitext
text/x-wiki
== Bi-Manual Remote Robotic Teleoperation ==
A minimal implementation of a bi-manual remote robotic teleoperation system using VR hand tracking and camera streaming.
fa92490a7759537255b0912659a41e74c0bcd81e
1205
1204
2024-05-22T20:31:40Z
Vrtnis
21
/* Bi-Manual Remote Robotic Teleoperation */
wikitext
text/x-wiki
A minimal implementation of a bi-manual remote robotic teleoperation system using VR hand tracking and camera streaming.
c039c8900da4bba16386935311c39f15d295f7d1
1206
1205
2024-05-22T20:33:33Z
Vrtnis
21
wikitext
text/x-wiki
A minimal implementation of a bi-manual remote robotic teleoperation system using VR hand tracking and camera streaming.
{| class="wikitable"
! Feature
! Status
|-
| VR and browser visualization
| ✔ Completed
|-
| Bi-manual hand gesture control
| ✔ Completed
|-
| Camera streaming (mono + stereo)
| ✔ Completed
|-
| Inverse kinematics
| ✔ Completed
|-
| Meta Quest Pro HMD + NVIDIA® Jetson AGX Orin™ Developer Kit
| ✔ Completed
|-
| .urdf robot model
| ✔ Completed
|-
| 3dof end effector control
| ✔ Completed
|-
| Debug 6dof end effector control
| ⬜ Pending
|-
| Resets to various default poses
| ⬜ Pending
|-
| Tested on real world robot
| ⬜ Pending
|-
| Record & playback trajectories
| ⬜ Pending
|}
33fb743762f4816485d9e7d45f4939f9855c8e0d
1207
1206
2024-05-22T20:34:58Z
Vrtnis
21
wikitext
text/x-wiki
A minimal implementation of a bi-manual remote robotic teleoperation system using VR hand tracking and camera streaming.
{| class="wikitable"
! Feature
! Status
|-
| VR and browser visualization
| ✔ Completed
|-
| Bi-manual hand gesture control
| ✔ Completed
|-
| Camera streaming (mono + stereo)
| ✔ Completed
|-
| Inverse kinematics
| ✔ Completed
|-
| Meta Quest Pro HMD + NVIDIA® Jetson AGX Orin™ Developer Kit
| ✔ Completed
|-
| .urdf robot model
| ✔ Completed
|-
| 3dof end effector control
| ✔ Completed
|-
| Debug 6dof end effector control
| ⬜ Pending
|-
| Resets to various default poses
| ⬜ Pending
|-
| Tested on real world robot
| ⬜ Pending
|-
| Record & playback trajectories
| ⬜ Pending
|}
=== Setup ===
<pre>
git clone https://github.com/kscalelabs/teleop.git && cd teleop
conda create -y -n teleop python=3.8 && conda activate teleop
pip install -r requirements.txt
</pre>
=== Usage ===
* Start the server on the robot computer:
<pre>python demo_hands_stereo_ik3dof.py</pre>
* Start ngrok on the robot computer:
<pre>ngrok http 8012</pre>
* Open the browser app on the HMD and go to the ngrok URL.
=== Dependencies ===
* Vuer is used for visualization
* PyBullet is used for inverse kinematics
* ngrok is used for networking
=== Citation ===
<pre>
@misc{teleop-2024,
title={Bi-Manual Remote Robotic Teleoperation},
author={Hugo Ponte},
year={2024},
url={https://github.com/kscalelabs/teleop}
}
</pre>
9fcaf48a5af437ebfb8418f489fe2a916fabeea1
K-Scale Teleop
0
278
1208
1207
2024-05-22T20:40:58Z
Vrtnis
21
Vrtnis moved page [[K-Scale teleop]] to [[K-Scale Teleop]]: Correct title capitalization
wikitext
text/x-wiki
A minimal implementation of a bi-manual remote robotic teleoperation system using VR hand tracking and camera streaming.
{| class="wikitable"
! Feature
! Status
|-
| VR and browser visualization
| ✔ Completed
|-
| Bi-manual hand gesture control
| ✔ Completed
|-
| Camera streaming (mono + stereo)
| ✔ Completed
|-
| Inverse kinematics
| ✔ Completed
|-
| Meta Quest Pro HMD + NVIDIA® Jetson AGX Orin™ Developer Kit
| ✔ Completed
|-
| .urdf robot model
| ✔ Completed
|-
| 3dof end effector control
| ✔ Completed
|-
| Debug 6dof end effector control
| ⬜ Pending
|-
| Resets to various default poses
| ⬜ Pending
|-
| Tested on real world robot
| ⬜ Pending
|-
| Record & playback trajectories
| ⬜ Pending
|}
=== Setup ===
<pre>
git clone https://github.com/kscalelabs/teleop.git && cd teleop
conda create -y -n teleop python=3.8 && conda activate teleop
pip install -r requirements.txt
</pre>
=== Usage ===
* Start the server on the robot computer:
<pre>python demo_hands_stereo_ik3dof.py</pre>
* Start ngrok on the robot computer:
<pre>ngrok http 8012</pre>
* Open the browser app on the HMD and go to the ngrok URL.
=== Dependencies ===
* Vuer is used for visualization
* PyBullet is used for inverse kinematics
* ngrok is used for networking
=== Citation ===
<pre>
@misc{teleop-2024,
title={Bi-Manual Remote Robotic Teleoperation},
author={Hugo Ponte},
year={2024},
url={https://github.com/kscalelabs/teleop}
}
</pre>
9fcaf48a5af437ebfb8418f489fe2a916fabeea1
K-Scale teleop
0
279
1209
2024-05-22T20:40:58Z
Vrtnis
21
Vrtnis moved page [[K-Scale teleop]] to [[K-Scale Teleop]]: Correct title capitalization
wikitext
text/x-wiki
#REDIRECT [[K-Scale Teleop]]
214fcb97c3e517152059dec71c45f1a5f75d7a80
K-Scale Manipulation Suite
0
280
1210
2024-05-22T20:43:18Z
Vrtnis
21
Created page with "== Setup - Linux == '''Clone and install dependencies''' <pre> git clone https://github.com/kscalelabs/gym-kmanip.git && cd gym-kmanip conda create -y -n gym-kmanip python=3.1..."
wikitext
text/x-wiki
== Setup - Linux ==
'''Clone and install dependencies'''
<pre>
git clone https://github.com/kscalelabs/gym-kmanip.git && cd gym-kmanip
conda create -y -n gym-kmanip python=3.10 && conda activate gym-kmanip
pip install -e .
</pre>
'''Run tests'''
<pre>
pip install pytest
pytest tests/test_env.py
</pre>
8c36c9a3b242b7fc630e2bbe089e76cc59dab77f
1211
1210
2024-05-22T20:44:44Z
Vrtnis
21
wikitext
text/x-wiki
== Setup - Linux ==
'''Clone and install dependencies'''
<pre>
git clone https://github.com/kscalelabs/gym-kmanip.git && cd gym-kmanip
conda create -y -n gym-kmanip python=3.10 && conda activate gym-kmanip
pip install -e .
</pre>
'''Run tests'''
<pre>
pip install pytest
pytest tests/test_env.py
</pre>
== Setup - Jetson Orin AGX ==
'''No conda on ARM64, install on bare metal'''
<pre>
sudo apt-get install libhdf5-dev
git clone https://github.com/kscalelabs/gym-kmanip.git && cd gym-kmanip
pip install -e .
</pre>
7f044f04386d1d5efe84d47c4ec3b9a561ebbec9
1212
1211
2024-05-22T20:47:43Z
Vrtnis
21
wikitext
text/x-wiki
== Setup - Linux ==
'''Clone and install dependencies'''
<pre>
git clone https://github.com/kscalelabs/gym-kmanip.git && cd gym-kmanip
conda create -y -n gym-kmanip python=3.10 && conda activate gym-kmanip
pip install -e .
</pre>
'''Run tests'''
<pre>
pip install pytest
pytest tests/test_env.py
</pre>
== Setup - Jetson Orin AGX ==
'''No conda on ARM64; install on bare metal'''
<pre>
sudo apt-get install libhdf5-dev
git clone https://github.com/kscalelabs/gym-kmanip.git && cd gym-kmanip
pip install -e .
</pre>
== Usage - Basic ==
'''Visualize the MuJoCo scene'''
<pre>
python gym_kmanip/examples/1_view_env.py
</pre>
'''Record a video of the MuJoCo scene'''
<pre>
python gym_kmanip/examples/2_record_video.py
</pre>
== Usage - Recording Data ==
'''K-Scale HuggingFace Datasets'''
'''Data is recorded via teleop; this requires additional dependencies'''
<pre>
pip install opencv-python==4.9.0.80
pip install vuer==0.0.30
pip install rerun-sdk==0.16.0
</pre>
'''Start the server on the robot computer'''
<pre>
python gym_kmanip/examples/4_record_data_teleop.py
</pre>
'''Start ngrok on the robot computer'''
<pre>
ngrok http 8012
</pre>
'''Open the browser app on the VR headset and go to the ngrok URL'''
715b688e127b9bd990789b006a78851fa4f9a925
1213
1212
2024-05-22T20:48:12Z
Vrtnis
21
wikitext
text/x-wiki
== Setup - Linux ==
'''Clone and install dependencies'''
<pre>
git clone https://github.com/kscalelabs/gym-kmanip.git && cd gym-kmanip
conda create -y -n gym-kmanip python=3.10 && conda activate gym-kmanip
pip install -e .
</pre>
'''Run tests'''
<pre>
pip install pytest
pytest tests/test_env.py
</pre>
== Setup - Jetson Orin AGX ==
'''No conda on ARM64; install on bare metal'''
<pre>
sudo apt-get install libhdf5-dev
git clone https://github.com/kscalelabs/gym-kmanip.git && cd gym-kmanip
pip install -e .
</pre>
== Usage - Basic ==
'''Visualize the MuJoCo scene'''
<pre>
python gym_kmanip/examples/1_view_env.py
</pre>
'''Record a video of the MuJoCo scene'''
<pre>
python gym_kmanip/examples/2_record_video.py
</pre>
== Usage - Recording Data ==
'''K-Scale HuggingFace Datasets'''
'''Data is recorded via teleop; this requires additional dependencies'''
<pre>
pip install opencv-python==4.9.0.80
pip install vuer==0.0.30
pip install rerun-sdk==0.16.0
</pre>
'''Start the server on the robot computer'''
<pre>
python gym_kmanip/examples/4_record_data_teleop.py
</pre>
'''Start ngrok on the robot computer'''
<pre>
ngrok http 8012
</pre>
'''Open the browser app on the VR headset and go to the ngrok URL'''
== Usage - Visualizing Data ==
'''Data is visualized using rerun'''
<pre>
rerun gym_kmanip/data/test.rrd
</pre>
== Usage - MuJoCo Sim Visualizer ==
'''MuJoCo provides a nice visualizer where you can directly control the robot'''
'''Download standalone MuJoCo'''
<pre>
tar -xzf ~/Downloads/mujoco-3.1.5-linux-x86_64.tar.gz -C /path/to/mujoco-3.1.5
</pre>
'''Run the simulator'''
<pre>
/path/to/mujoco-3.1.5/bin/simulate gym_kmanip/assets/_env_solo_arm.xml
</pre>
792c28c85f7b9acafa137b7f38eefb915a5e5164
1214
1213
2024-05-22T20:49:57Z
Vrtnis
21
wikitext
text/x-wiki
== Setup - Linux ==
'''Clone and install dependencies'''
<pre>
git clone https://github.com/kscalelabs/gym-kmanip.git && cd gym-kmanip
conda create -y -n gym-kmanip python=3.10 && conda activate gym-kmanip
pip install -e .
</pre>
'''Run tests'''
<pre>
pip install pytest
pytest tests/test_env.py
</pre>
== Setup - Jetson Orin AGX ==
'''No conda on ARM64; install on bare metal'''
<pre>
sudo apt-get install libhdf5-dev
git clone https://github.com/kscalelabs/gym-kmanip.git && cd gym-kmanip
pip install -e .
</pre>
== Usage - Basic ==
'''Visualize the MuJoCo scene'''
<pre>
python gym_kmanip/examples/1_view_env.py
</pre>
'''Record a video of the MuJoCo scene'''
<pre>
python gym_kmanip/examples/2_record_video.py
</pre>
== Usage - Recording Data ==
'''K-Scale HuggingFace Datasets'''
'''Data is recorded via teleop; this requires additional dependencies'''
<pre>
pip install opencv-python==4.9.0.80
pip install vuer==0.0.30
pip install rerun-sdk==0.16.0
</pre>
'''Start the server on the robot computer'''
<pre>
python gym_kmanip/examples/4_record_data_teleop.py
</pre>
'''Start ngrok on the robot computer'''
<pre>
ngrok http 8012
</pre>
'''Open the browser app on the VR headset and go to the ngrok URL'''
== Usage - Visualizing Data ==
'''Data is visualized using rerun'''
<pre>
rerun gym_kmanip/data/test.rrd
</pre>
== Usage - MuJoCo Sim Visualizer ==
'''MuJoCo provides a nice visualizer where you can directly control the robot'''
'''Download standalone MuJoCo'''
<pre>
tar -xzf ~/Downloads/mujoco-3.1.5-linux-x86_64.tar.gz -C /path/to/mujoco-3.1.5
</pre>
'''Run the simulator'''
<pre>
/path/to/mujoco-3.1.5/bin/simulate gym_kmanip/assets/_env_solo_arm.xml
</pre>
<pre>
Citation:
@misc{teleop-2024,
title={gym-kmanip},
author={Hugo Ponte},
year={2024},
url={https://github.com/kscalelabs/gym-kmanip}
}
</pre>
097728d67ba52ff1f328990947267b7dbcb4bf17
1215
1214
2024-05-22T20:50:31Z
Vrtnis
21
wikitext
text/x-wiki
== Setup - Linux ==
'''Clone and install dependencies'''
<pre>
git clone https://github.com/kscalelabs/gym-kmanip.git && cd gym-kmanip
conda create -y -n gym-kmanip python=3.10 && conda activate gym-kmanip
pip install -e .
</pre>
'''Run tests'''
<pre>
pip install pytest
pytest tests/test_env.py
</pre>
== Setup - Jetson Orin AGX ==
'''No conda on ARM64; install on bare metal'''
<pre>
sudo apt-get install libhdf5-dev
git clone https://github.com/kscalelabs/gym-kmanip.git && cd gym-kmanip
pip install -e .
</pre>
== Usage - Basic ==
'''Visualize the MuJoCo scene'''
<pre>
python gym_kmanip/examples/1_view_env.py
</pre>
'''Record a video of the MuJoCo scene'''
<pre>
python gym_kmanip/examples/2_record_video.py
</pre>
== Usage - Recording Data ==
'''K-Scale HuggingFace Datasets'''
'''Data is recorded via teleop; this requires additional dependencies'''
<pre>
pip install opencv-python==4.9.0.80
pip install vuer==0.0.30
pip install rerun-sdk==0.16.0
</pre>
'''Start the server on the robot computer'''
<pre>
python gym_kmanip/examples/4_record_data_teleop.py
</pre>
'''Start ngrok on the robot computer'''
<pre>
ngrok http 8012
</pre>
'''Open the browser app on the VR headset and go to the ngrok URL'''
== Usage - Visualizing Data ==
'''Data is visualized using rerun'''
<pre>
rerun gym_kmanip/data/test.rrd
</pre>
== Usage - MuJoCo Sim Visualizer ==
'''MuJoCo provides a nice visualizer where you can directly control the robot'''
'''Download standalone MuJoCo'''
<pre>
tar -xzf ~/Downloads/mujoco-3.1.5-linux-x86_64.tar.gz -C /path/to/mujoco-3.1.5
</pre>
'''Run the simulator'''
<pre>
/path/to/mujoco-3.1.5/bin/simulate gym_kmanip/assets/_env_solo_arm.xml
</pre>
== Citation ==
<pre>
@misc{teleop-2024,
title={gym-kmanip},
author={Hugo Ponte},
year={2024},
url={https://github.com/kscalelabs/gym-kmanip}
}
</pre>
82af190b19d1fdc37439d0087f4cbf5ff8be9224
1216
1215
2024-05-22T20:51:32Z
Vrtnis
21
wikitext
text/x-wiki
== Setup - Linux ==
[https://github.com/kscalelabs/gym-kmanip https://github.com/kscalelabs/gym-kmanip]
'''Clone and install dependencies'''
<pre>
git clone https://github.com/kscalelabs/gym-kmanip.git && cd gym-kmanip
conda create -y -n gym-kmanip python=3.10 && conda activate gym-kmanip
pip install -e .
</pre>
'''Run tests'''
<pre>
pip install pytest
pytest tests/test_env.py
</pre>
== Setup - Jetson Orin AGX ==
'''No conda on ARM64; install on bare metal'''
<pre>
sudo apt-get install libhdf5-dev
git clone https://github.com/kscalelabs/gym-kmanip.git && cd gym-kmanip
pip install -e .
</pre>
== Usage - Basic ==
'''Visualize the MuJoCo scene'''
<pre>
python gym_kmanip/examples/1_view_env.py
</pre>
'''Record a video of the MuJoCo scene'''
<pre>
python gym_kmanip/examples/2_record_video.py
</pre>
== Usage - Recording Data ==
'''K-Scale HuggingFace Datasets'''
'''Data is recorded via teleop; this requires additional dependencies'''
<pre>
pip install opencv-python==4.9.0.80
pip install vuer==0.0.30
pip install rerun-sdk==0.16.0
</pre>
'''Start the server on the robot computer'''
<pre>
python gym_kmanip/examples/4_record_data_teleop.py
</pre>
'''Start ngrok on the robot computer'''
<pre>
ngrok http 8012
</pre>
'''Open the browser app on the VR headset and go to the ngrok URL'''
== Usage - Visualizing Data ==
'''Data is visualized using rerun'''
<pre>
rerun gym_kmanip/data/test.rrd
</pre>
== Usage - MuJoCo Sim Visualizer ==
'''MuJoCo provides a nice visualizer where you can directly control the robot'''
'''Download standalone MuJoCo'''
<pre>
tar -xzf ~/Downloads/mujoco-3.1.5-linux-x86_64.tar.gz -C /path/to/mujoco-3.1.5
</pre>
'''Run the simulator'''
<pre>
/path/to/mujoco-3.1.5/bin/simulate gym_kmanip/assets/_env_solo_arm.xml
</pre>
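The same scene file can also be loaded from Python with the official MuJoCo bindings. This is only a sketch, assuming <code>pip install mujoco</code> and that the script is run from the gym-kmanip repository root.
<syntaxhighlight lang="python">
# Sketch: load the solo-arm scene with the official MuJoCo Python bindings and
# open the interactive viewer. Assumes `pip install mujoco` and that the script
# is run from the gym-kmanip repository root.
import mujoco
import mujoco.viewer

model = mujoco.MjModel.from_xml_path("gym_kmanip/assets/_env_solo_arm.xml")
data = mujoco.MjData(model)
mujoco.viewer.launch(model, data)  # blocks until the viewer window is closed
</syntaxhighlight>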
== Citation ==
<pre>
@misc{teleop-2024,
title={gym-kmanip},
author={Hugo Ponte},
year={2024},
url={https://github.com/kscalelabs/gym-kmanip}
}
</pre>
21d1535a2c2a1e74c746adaf0b5e1e967b64b5d6
MuJoCo WASM
0
257
1217
1160
2024-05-23T17:01:19Z
Kewang
11
wikitext
text/x-wiki
== Install emscripten ==
First, you need to install Emscripten, a compiler toolchain for WebAssembly.
=== Get the emsdk repo ===
<code>
git clone https://github.com/emscripten-core/emsdk.git
</code>
=== Enter that directory ===
<code>
cd emsdk
</code>
=== Download and install the latest SDK tools ===
<code>
./emsdk install latest
</code>
=== Make the "latest" SDK "active" ===
<code>
./emsdk activate latest
</code>
=== Activate PATH and other environment variables ===
<code>
source ./emsdk_env.sh
</code>
These variables are set only for the current terminal. If you want them available in all terminals, add them to your shell profile. The environment variables are:
<code>
EMSDK = < path to emsdk dir >
EM_CONFIG = ~/.emscripten
EMSDK_NODE = < path to emsdk dir >/node/12.9.1_64bit/bin/node
</code>
=== Now just try it! ===
<code>
emcc
</code>
== Build the mujoco_wasm Binary ==
First, clone the repository:
<code>git clone https://github.com/zalo/mujoco_wasm</code>
Next, you'll build the MuJoCo WebAssembly binary.
<syntaxhighlight lang="bash">
mkdir build
cd build
emcmake cmake ..
make
</syntaxhighlight>
[[File:Carbon (1).png|800px|thumb|none|emcmake cmake ..]]
[[File:Carbon (2).png|400px|thumb|none|make]]
'''Tip:''' If you get an "undefined symbol: saveSetjmp/testSetjmp" error at the build step, revert to an older emsdk release:
<code>
./emsdk install 3.1.56 && ./emsdk activate 3.1.56 && source ./emsdk_env.sh
</code>
== Running in Browser ==
Run this in your mujoco_wasm folder to start a local web server.
<code>
python -m http.server 8000
</code>
Then navigate to:
<code>
http://localhost:8000/index.html
</code>
[[File:Wasm screenshot13-40-40.png|800px|thumb|none|MuJoCo running in browser]]
== Running in Cloud/Cluster and Viewing on Local Machine ==
Add an extra port-forwarding parameter to your ssh command:
<code>
ssh -L 8000:127.0.0.1:8000 my_name@my_cluster_ip
</code>
Then you can open it in the browser on your local machine.
d7e990e7a8d776a82813813c91664a405ff29177
Robot Descriptions List
0
281
1218
2024-05-23T20:51:06Z
Vrtnis
21
Created page with "=== Educational === {| class="wikitable" |- ! Name !! Formats !! License !! Meshes !! Inertias !! Collisions |- | Double Pendulum || [URDF](https://github.com/Gepetto/example-..."
wikitext
text/x-wiki
=== Educational ===
{| class="wikitable"
|-
! Name !! Formats !! License !! Meshes !! Inertias !! Collisions
|-
| Double Pendulum || [https://github.com/Gepetto/example-robot-data/tree/master/robots/double_pendulum_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Simple Humanoid || [https://github.com/laas/simple_humanoid_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✖️
|-
| TriFingerEdu || [https://github.com/facebookresearch/differentiable-robot-model/tree/main/diff_robot_data/trifinger_edu_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| FingerEdu || [https://github.com/Gepetto/example-robot-data/tree/master/robots/finger_edu_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
e5373f7aa5ad10d20d71e50567b4782d6b294ed7
1219
1218
2024-05-23T20:51:15Z
Vrtnis
21
/* Educational */
wikitext
text/x-wiki
=== Educational ===
{| class="wikitable"
|-
! Name !! Formats !! License !! Meshes !! Inertias !! Collisions
|-
| Double Pendulum || [https://github.com/Gepetto/example-robot-data/tree/master/robots/double_pendulum_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Simple Humanoid || [https://github.com/laas/simple_humanoid_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✖️
|-
| TriFingerEdu || [https://github.com/facebookresearch/differentiable-robot-model/tree/main/diff_robot_data/trifinger_edu_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| FingerEdu || [https://github.com/Gepetto/example-robot-data/tree/master/robots/finger_edu_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
=== End Effectors ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! License !! Meshes !! Inertias !! Collisions
|-
| Allegro Hand || Wonik Robotics || [https://github.com/RobotLocomotion/models/tree/master/allegro_hand_description/urdf URDF], [https://github.com/google-deepmind/mujoco_menagerie/tree/main/wonik_allegro MJCF] || BSD || ✔️ || ✔️ || ✔️
|-
| Shadow Hand E3M5 || The Shadow Robot Company || [https://github.com/deepmind/mujoco_menagerie/tree/main/shadow_hand MJCF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Robotiq 2F-85 || Robotiq || [https://github.com/deepmind/mujoco_menagerie/tree/main/robotiq_2f85 MJCF], [https://github.com/a-price/robotiq_arg85_description URDF], [https://github.com/ros-industrial/robotiq/tree/kinetic-devel/robotiq_2f_85_gripper_visualization Xacro] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| BarrettHand || Barrett Technology || [https://github.com/jhu-lcsr-attic/bhand_model/tree/master/robots URDF] || BSD || ✖️ || ✔️ || ✔️
|-
| WSG 50 || SCHUNK || [https://github.com/RobotLocomotion/models/tree/master/wsg_50_description SDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
1dfb184837d971ddd6edc56beb6acec9eeeb796a
1220
1219
2024-05-23T21:22:32Z
Vrtnis
21
wikitext
text/x-wiki
'''Updates in progress'''
=== Educational ===
{| class="wikitable"
|-
! Name !! Formats !! License !! Meshes !! Inertias !! Collisions
|-
| Double Pendulum || [https://github.com/Gepetto/example-robot-data/tree/master/robots/double_pendulum_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Simple Humanoid || [https://github.com/laas/simple_humanoid_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✖️
|-
| TriFingerEdu || [https://github.com/facebookresearch/differentiable-robot-model/tree/main/diff_robot_data/trifinger_edu_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| FingerEdu || [https://github.com/Gepetto/example-robot-data/tree/master/robots/finger_edu_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
=== End Effectors ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! License !! Meshes !! Inertias !! Collisions
|-
| Allegro Hand || Wonik Robotics || [https://github.com/RobotLocomotion/models/tree/master/allegro_hand_description/urdf URDF], [https://github.com/google-deepmind/mujoco_menagerie/tree/main/wonik_allegro MJCF] || BSD || ✔️ || ✔️ || ✔️
|-
| Shadow Hand E3M5 || The Shadow Robot Company || [https://github.com/deepmind/mujoco_menagerie/tree/main/shadow_hand MJCF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Robotiq 2F-85 || Robotiq || [https://github.com/deepmind/mujoco_menagerie/tree/main/robotiq_2f85 MJCF], [https://github.com/a-price/robotiq_arg85_description URDF], [https://github.com/ros-industrial/robotiq/tree/kinetic-devel/robotiq_2f_85_gripper_visualization Xacro] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| BarrettHand || Barrett Technology || [https://github.com/jhu-lcsr-attic/bhand_model/tree/master/robots URDF] || BSD || ✖️ || ✔️ || ✔️
|-
| WSG 50 || SCHUNK || [https://github.com/RobotLocomotion/models/tree/master/wsg_50_description SDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
544cd048c05f4ff818016ceea21ff5f5ee62c165
1221
1220
2024-05-23T21:22:53Z
Vrtnis
21
wikitext
text/x-wiki
== Updates in progress ==
=== Educational ===
{| class="wikitable"
|-
! Name !! Formats !! License !! Meshes !! Inertias !! Collisions
|-
| Double Pendulum || [https://github.com/Gepetto/example-robot-data/tree/master/robots/double_pendulum_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Simple Humanoid || [https://github.com/laas/simple_humanoid_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✖️
|-
| TriFingerEdu || [https://github.com/facebookresearch/differentiable-robot-model/tree/main/diff_robot_data/trifinger_edu_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| FingerEdu || [https://github.com/Gepetto/example-robot-data/tree/master/robots/finger_edu_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
=== End Effectors ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! License !! Meshes !! Inertias !! Collisions
|-
| Allegro Hand || Wonik Robotics || [https://github.com/RobotLocomotion/models/tree/master/allegro_hand_description/urdf URDF], [https://github.com/google-deepmind/mujoco_menagerie/tree/main/wonik_allegro MJCF] || BSD || ✔️ || ✔️ || ✔️
|-
| Shadow Hand E3M5 || The Shadow Robot Company || [https://github.com/deepmind/mujoco_menagerie/tree/main/shadow_hand MJCF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Robotiq 2F-85 || Robotiq || [https://github.com/deepmind/mujoco_menagerie/tree/main/robotiq_2f85 MJCF], [https://github.com/a-price/robotiq_arg85_description URDF], [https://github.com/ros-industrial/robotiq/tree/kinetic-devel/robotiq_2f_85_gripper_visualization Xacro] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| BarrettHand || Barrett Technology || [https://github.com/jhu-lcsr-attic/bhand_model/tree/master/robots URDF] || BSD || ✖️ || ✔️ || ✔️
|-
| WSG 50 || SCHUNK || [https://github.com/RobotLocomotion/models/tree/master/wsg_50_description SDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
cb120f09e43108c29e5cc916543ddae09558e4df
1222
1221
2024-05-24T00:54:23Z
Vrtnis
21
wikitext
text/x-wiki
== Updates in progress ==
=== Educational ===
{| class="wikitable"
|-
! Name !! Formats !! URL !! License !! Meshes !! Inertias !! Collisions
|-
| Double Pendulum || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/double_pendulum_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Simple Humanoid || URDF || [https://github.com/laas/simple_humanoid_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✖️
|-
| TriFingerEdu || URDF || [https://github.com/facebookresearch/differentiable-robot-model/tree/main/diff_robot_data/trifinger_edu_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| FingerEdu || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/finger_edu_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
=== End Effectors ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! URL !! License !! Meshes !! Inertias !! Collisions
|-
| Allegro Hand || Wonik Robotics || URDF, MJCF || [https://github.com/RobotLocomotion/models/tree/master/allegro_hand_description/urdf URDF], [https://github.com/google-deepmind/mujoco_menagerie/tree/main/wonik_allegro MJCF] || BSD || ✔️ || ✔️ || ✔️
|-
| Shadow Hand E3M5 || The Shadow Robot Company || MJCF || [https://github.com/deepmind/mujoco_menagerie/tree/main/shadow_hand MJCF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Robotiq 2F-85 || Robotiq || MJCF, URDF, Xacro || [https://github.com/deepmind/mujoco_menagerie/tree/main/robotiq_2f85 MJCF], [https://github.com/a-price/robotiq_arg85_description URDF], [https://github.com/ros-industrial/robotiq/tree/kinetic-devel/robotiq_2f_85_gripper_visualization Xacro] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| BarrettHand || Barrett Technology || URDF || [https://github.com/jhu-lcsr-attic/bhand_model/tree/master/robots URDF] || BSD || ✖️ || ✔️ || ✔️
|-
| WSG 50 || SCHUNK || SDF || [https://github.com/RobotLocomotion/models/tree/master/wsg_50_description SDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
a889d9c3fd779eadddcef184e5976e9bc1aacade
1223
1222
2024-05-24T01:10:35Z
Vrtnis
21
wikitext
text/x-wiki
== Updates in progress ==
=== Educational ===
{| class="wikitable"
|-
! Name !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Double Pendulum || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/double_pendulum_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Simple Humanoid || URDF || [https://github.com/laas/simple_humanoid_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✖️
|-
| TriFingerEdu || URDF || [https://github.com/facebookresearch/differentiable-robot-model/tree/main/diff_robot_data/trifinger_edu_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| FingerEdu || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/finger_edu_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
d824eee2d5cf4d60f44998f60ac18b2bc10d7a4d
1224
1223
2024-05-24T01:11:44Z
Vrtnis
21
wikitext
text/x-wiki
== Updates in progress ==
=== Educational ===
{| class="wikitable"
|-
! Name !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Double Pendulum || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/double_pendulum_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Simple Humanoid || URDF || [https://github.com/laas/simple_humanoid_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✖️
|-
| TriFingerEdu || URDF || [https://github.com/facebookresearch/differentiable-robot-model/tree/main/diff_robot_data/trifinger_edu_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| FingerEdu || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/finger_edu_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
=== End Effectors ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Allegro Hand || Wonik Robotics || URDF, MJCF || [https://github.com/RobotLocomotion/models/tree/master/allegro_hand_description/urdf URDF], [https://github.com/google-deepmind/mujoco_menagerie/tree/main/wonik_allegro MJCF] || BSD || ✔️ || ✔️ || ✔️
|-
| Shadow Hand E3M5 || The Shadow Robot Company || MJCF || [https://github.com/deepmind/mujoco_menagerie/tree/main/shadow_hand MJCF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Robotiq 2F-85 || Robotiq || MJCF, URDF, Xacro || [https://github.com/deepmind/mujoco_menagerie/tree/main/robotiq_2f85 MJCF], [https://github.com/a-price/robotiq_arg85_description URDF], [https://github.com/ros-industrial/robotiq/tree/kinetic-devel/robotiq_2f_85_gripper_visualization Xacro] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| BarrettHand || Barrett Technology || URDF || [https://github.com/jhu-lcsr-attic/bhand_model/tree/master/robots URDF] || BSD || ✖️ || ✔️ || ✔️
|-
| WSG 50 || SCHUNK || SDF || [https://github.com/RobotLocomotion/models/tree/master/wsg_50_description SDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
e2311e97f9a0879892e8ab1522fb197ef6369436
1225
1224
2024-05-24T01:16:20Z
Vrtnis
21
wikitext
text/x-wiki
=== Educational ===
{| class="wikitable"
|-
! Name !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Double Pendulum || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/double_pendulum_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Simple Humanoid || URDF || [https://github.com/laas/simple_humanoid_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✖️
|-
| TriFingerEdu || URDF || [https://github.com/facebookresearch/differentiable-robot-model/tree/main/diff_robot_data/trifinger_edu_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| FingerEdu || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/finger_edu_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
=== End Effectors ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Allegro Hand || Wonik Robotics || URDF, MJCF || [https://github.com/RobotLocomotion/models/tree/master/allegro_hand_description/urdf URDF], [https://github.com/google-deepmind/mujoco_menagerie/tree/main/wonik_allegro MJCF] || BSD || ✔️ || ✔️ || ✔️
|-
| Shadow Hand E3M5 || The Shadow Robot Company || MJCF || [https://github.com/deepmind/mujoco_menagerie/tree/main/shadow_hand MJCF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Robotiq 2F-85 || Robotiq || MJCF, URDF, Xacro || [https://github.com/deepmind/mujoco_menagerie/tree/main/robotiq_2f85 MJCF], [https://github.com/a-price/robotiq_arg85_description URDF], [https://github.com/ros-industrial/robotiq/tree/kinetic-devel/robotiq_2f_85_gripper_visualization Xacro] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| BarrettHand || Barrett Technology || URDF || [https://github.com/jhu-lcsr-attic/bhand_model/tree/master/robots URDF] || BSD || ✖️ || ✔️ || ✔️
|-
| WSG 50 || SCHUNK || SDF || [https://github.com/RobotLocomotion/models/tree/master/wsg_50_description SDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
ac3bf2e3557cb254bab4bfa2c126da22b7f39009
1226
1225
2024-05-24T01:19:41Z
Vrtnis
21
/* Add drones */
wikitext
text/x-wiki
=== Educational ===
{| class="wikitable"
|-
! Name !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Double Pendulum || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/double_pendulum_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Simple Humanoid || URDF || [https://github.com/laas/simple_humanoid_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✖️
|-
| TriFingerEdu || URDF || [https://github.com/facebookresearch/differentiable-robot-model/tree/main/diff_robot_data/trifinger_edu_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| FingerEdu || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/finger_edu_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
=== End Effectors ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Allegro Hand || Wonik Robotics || URDF, MJCF || [https://github.com/RobotLocomotion/models/tree/master/allegro_hand_description/urdf URDF], [https://github.com/google-deepmind/mujoco_menagerie/tree/main/wonik_allegro MJCF] || BSD || ✔️ || ✔️ || ✔️
|-
| Shadow Hand E3M5 || The Shadow Robot Company || MJCF || [https://github.com/deepmind/mujoco_menagerie/tree/main/shadow_hand MJCF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Robotiq 2F-85 || Robotiq || MJCF, URDF, Xacro || [https://github.com/deepmind/mujoco_menagerie/tree/main/robotiq_2f85 MJCF], [https://github.com/a-price/robotiq_arg85_description URDF], [https://github.com/ros-industrial/robotiq/tree/kinetic-devel/robotiq_2f_85_gripper_visualization Xacro] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| BarrettHand || Barrett Technology || URDF || [https://github.com/jhu-lcsr-attic/bhand_model/tree/master/robots URDF] || BSD || ✖️ || ✔️ || ✔️
|-
| WSG 50 || SCHUNK || SDF || [https://github.com/RobotLocomotion/models/tree/master/wsg_50_description SDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
=== Drones ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! URL !! License !! Meshes !! Inertias !! Collisions
|-
| X2 || Skydio || MJCF, URDF || [https://github.com/google-deepmind/mujoco_menagerie/tree/main/skydio_x2 MJCF], [https://github.com/lvjonok/skydio_x2_description/blob/master/urdf/skydio_x2.urdf URDF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Crazyflie 2.0 || Bitcraze || URDF, MJCF || [https://github.com/utiasDSL/gym-pybullet-drones/tree/master/gym_pybullet_drones/assets URDF], [https://github.com/google-deepmind/mujoco_menagerie/tree/main/bitcraze_crazyflie_2 MJCF] || MIT || ✔️ || ✔️ || ✔️
|-
| Crazyflie 1.0 || Bitcraze || Xacro || [https://github.com/whoenig/crazyflie_ros/tree/master/crazyflie_description Xacro] || MIT || ✔️ || ✔️ || ✖️
|}
ca443b85a6d22ae008ff91bd2251f95b9490d638
1227
1226
2024-05-24T01:27:03Z
Vrtnis
21
wikitext
text/x-wiki
=== Humanoids ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! URL !! License !! Meshes !! Inertias !! Collisions
|-
| Stompy || K-Scale Labs || URDF || [https://media.kscale.dev/stompy/latest_stl_urdf.tar.gz URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| Digit || Agility Robotics || URDF || [https://github.com/adubredu/DigitRobot.jl URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| H1 || UNITREE Robotics || MJCF || [https://github.com/google-deepmind/mujoco_menagerie/tree/main/unitree_h1 MJCF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Atlas v4 || Boston Dynamics || URDF || [https://github.com/openai/roboschool/tree/1.0.49/roboschool/models_robot/atlas_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| Valkyrie || NASA JSC Robotics || URDF, Xacro || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/val_description/model URDF], [https://gitlab.com/nasa-jsc-robotics/val_description Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| JVRC-1 || AIST || MJCF, URDF || [https://github.com/isri-aist/jvrc_mj_description/ MJCF], [https://github.com/stephane-caron/jvrc_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| iCub || IIT || URDF || [https://github.com/robotology/icub-models/tree/master/iCub URDF] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| JAXON || JSK || COLLADA, URDF, VRML || [https://github.com/stephane-caron/openrave_models/tree/master/JAXON COLLADA], [https://github.com/robot-descriptions/jaxon_description URDF], [https://github.com/start-jsk/rtmros_choreonoid/tree/master/jvrc_models/JAXON_JVRC VRML] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| Atlas DRC (v3) || Boston Dynamics || URDF || [https://github.com/RobotLocomotion/models/blob/master/atlas/atlas_convex_hull.urdf URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Gundam RX-78 || Bandai Namco Filmworks || URDF || [https://github.com/gundam-global-challenge/gundam_robot/tree/master/gundam_rx78_description URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| Romeo || Aldebaran Robotics || URDF || [https://github.com/ros-aldebaran/romeo_robot/tree/master/romeo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| SigmaBan || Rhoban || URDF || [https://github.com/Rhoban/sigmaban_urdf URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| Robonaut 2 || NASA JSC Robotics || URDF || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/r2_description URDF] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| TALOS || PAL Robotics || URDF, Xacro || [https://github.com/stack-of-tasks/talos-data URDF], [https://github.com/pal-robotics/talos_robot/tree/kinetic-devel/talos_description Xacro] || LGPL-3.0, Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| WALK-MAN || IIT || Xacro || [https://github.com/ADVRHumanoids/iit-walkman-ros-pkg/tree/master/walkman_urdf Xacro] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Draco3 || Apptronik || URDF || [https://github.com/shbang91/draco3_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| ergoCub || IIT || URDF || [https://github.com/icub-tech-iit/ergocub-software/tree/master/urdf/ergoCub URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
=== Educational ===
{| class="wikitable"
|-
! Name !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Double Pendulum || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/double_pendulum_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Simple Humanoid || URDF || [https://github.com/laas/simple_humanoid_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✖️
|-
| TriFingerEdu || URDF || [https://github.com/facebookresearch/differentiable-robot-model/tree/main/diff_robot_data/trifinger_edu_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| FingerEdu || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/finger_edu_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
=== End Effectors ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Allegro Hand || Wonik Robotics || URDF, MJCF || [https://github.com/RobotLocomotion/models/tree/master/allegro_hand_description/urdf URDF], [https://github.com/google-deepmind/mujoco_menagerie/tree/main/wonik_allegro MJCF] || BSD || ✔️ || ✔️ || ✔️
|-
| Shadow Hand E3M5 || The Shadow Robot Company || MJCF || [https://github.com/deepmind/mujoco_menagerie/tree/main/shadow_hand MJCF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Robotiq 2F-85 || Robotiq || MJCF, URDF, Xacro || [https://github.com/deepmind/mujoco_menagerie/tree/main/robotiq_2f85 MJCF], [https://github.com/a-price/robotiq_arg85_description URDF], [https://github.com/ros-industrial/robotiq/tree/kinetic-devel/robotiq_2f_85_gripper_visualization Xacro] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| BarrettHand || Barrett Technology || URDF || [https://github.com/jhu-lcsr-attic/bhand_model/tree/master/robots URDF] || BSD || ✖️ || ✔️ || ✔️
|-
| WSG 50 || SCHUNK || SDF || [https://github.com/RobotLocomotion/models/tree/master/wsg_50_description SDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
=== Drones ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! URL !! License !! Meshes !! Inertias !! Collisions
|-
| X2 || Skydio || MJCF, URDF || [https://github.com/google-deepmind/mujoco_menagerie/tree/main/skydio_x2 MJCF], [https://github.com/lvjonok/skydio_x2_description/blob/master/urdf/skydio_x2.urdf URDF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Crazyflie 2.0 || Bitcraze || URDF, MJCF || [https://github.com/utiasDSL/gym-pybullet-drones/tree/master/gym_pybullet_drones/assets URDF], [https://github.com/google-deepmind/mujoco_menagerie/tree/main/bitcraze_crazyflie_2 MJCF] || MIT || ✔️ || ✔️ || ✔️
|-
| Crazyflie 1.0 || Bitcraze || Xacro || [https://github.com/whoenig/crazyflie_ros/tree/master/crazyflie_description Xacro] || MIT || ✔️ || ✔️ || ✖️
|}
bbc54f69bf28162ca4128e4963b718b0ad812481
1228
1227
2024-05-24T01:27:35Z
Vrtnis
21
wikitext
text/x-wiki
=== Humanoids ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! URL !! License !! Meshes !! Inertias !! Collisions
|-
| Stompy || K-Scale Labs || URDF || [https://media.kscale.dev/stompy/latest_stl_urdf.tar.gz URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| Digit || Agility Robotics || URDF || [https://github.com/adubredu/DigitRobot.jl URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| H1 || UNITREE Robotics || MJCF || [https://github.com/google-deepmind/mujoco_menagerie/tree/main/unitree_h1 MJCF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Atlas v4 || Boston Dynamics || URDF || [https://github.com/openai/roboschool/tree/1.0.49/roboschool/models_robot/atlas_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| Valkyrie || NASA JSC Robotics || URDF, Xacro || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/val_description/model URDF], [https://gitlab.com/nasa-jsc-robotics/val_description Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| JVRC-1 || AIST || MJCF, URDF || [https://github.com/isri-aist/jvrc_mj_description/ MJCF], [https://github.com/stephane-caron/jvrc_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| iCub || IIT || URDF || [https://github.com/robotology/icub-models/tree/master/iCub URDF] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| JAXON || JSK || COLLADA, URDF, VRML || [https://github.com/stephane-caron/openrave_models/tree/master/JAXON COLLADA], [https://github.com/robot-descriptions/jaxon_description URDF], [https://github.com/start-jsk/rtmros_choreonoid/tree/master/jvrc_models/JAXON_JVRC VRML] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| Atlas DRC (v3) || Boston Dynamics || URDF || [https://github.com/RobotLocomotion/models/blob/master/atlas/atlas_convex_hull.urdf URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Gundam RX-78 || Bandai Namco Filmworks || URDF || [https://github.com/gundam-global-challenge/gundam_robot/tree/master/gundam_rx78_description URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| Romeo || Aldebaran Robotics || URDF || [https://github.com/ros-aldebaran/romeo_robot/tree/master/romeo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| SigmaBan || Rhoban || URDF || [https://github.com/Rhoban/sigmaban_urdf URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| Robonaut 2 || NASA JSC Robotics || URDF || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/r2_description URDF] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| TALOS || PAL Robotics || URDF, Xacro || [https://github.com/stack-of-tasks/talos-data URDF], [https://github.com/pal-robotics/talos_robot/tree/kinetic-devel/talos_description Xacro] || LGPL-3.0, Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| WALK-MAN || IIT || Xacro || [https://github.com/ADVRHumanoids/iit-walkman-ros-pkg/tree/master/walkman_urdf Xacro] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Draco3 || Apptronik || URDF || [https://github.com/shbang91/draco3_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| ergoCub || IIT || URDF || [https://github.com/icub-tech-iit/ergocub-software/tree/master/urdf/ergoCub URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
=== Educational ===
{| class="wikitable"
|-
! Name !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Double Pendulum || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/double_pendulum_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Simple Humanoid || URDF || [https://github.com/laas/simple_humanoid_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✖️
|-
| TriFingerEdu || URDF || [https://github.com/facebookresearch/differentiable-robot-model/tree/main/diff_robot_data/trifinger_edu_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| FingerEdu || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/finger_edu_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
=== End Effectors ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Allegro Hand || Wonik Robotics || URDF, MJCF || [https://github.com/RobotLocomotion/models/tree/master/allegro_hand_description/urdf URDF], [https://github.com/google-deepmind/mujoco_menagerie/tree/main/wonik_allegro MJCF] || BSD || ✔️ || ✔️ || ✔️
|-
| Shadow Hand E3M5 || The Shadow Robot Company || MJCF || [https://github.com/deepmind/mujoco_menagerie/tree/main/shadow_hand MJCF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Robotiq 2F-85 || Robotiq || MJCF, URDF, Xacro || [https://github.com/deepmind/mujoco_menagerie/tree/main/robotiq_2f85 MJCF], [https://github.com/a-price/robotiq_arg85_description URDF], [https://github.com/ros-industrial/robotiq/tree/kinetic-devel/robotiq_2f_85_gripper_visualization Xacro] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| BarrettHand || Barrett Technology || URDF || [https://github.com/jhu-lcsr-attic/bhand_model/tree/master/robots URDF] || BSD || ✖️ || ✔️ || ✔️
|-
| WSG 50 || SCHUNK || SDF || [https://github.com/RobotLocomotion/models/tree/master/wsg_50_description SDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
98a1eb7ff17908e63052b7267d6d8176529c852f
1229
1228
2024-05-24T01:27:55Z
Vrtnis
21
wikitext
text/x-wiki
=== Humanoids ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! URL !! License !! Meshes !! Inertias !! Collisions
|-
| Stompy || K-Scale Labs || URDF || [https://media.kscale.dev/stompy/latest_stl_urdf.tar.gz URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| Digit || Agility Robotics || URDF || [https://github.com/adubredu/DigitRobot.jl URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| H1 || UNITREE Robotics || MJCF || [https://github.com/google-deepmind/mujoco_menagerie/tree/main/unitree_h1 MJCF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Atlas v4 || Boston Dynamics || URDF || [https://github.com/openai/roboschool/tree/1.0.49/roboschool/models_robot/atlas_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| Valkyrie || NASA JSC Robotics || URDF, Xacro || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/val_description/model URDF], [https://gitlab.com/nasa-jsc-robotics/val_description Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| JVRC-1 || AIST || MJCF, URDF || [https://github.com/isri-aist/jvrc_mj_description/ MJCF], [https://github.com/stephane-caron/jvrc_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| iCub || IIT || URDF || [https://github.com/robotology/icub-models/tree/master/iCub URDF] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| JAXON || JSK || COLLADA, URDF, VRML || [https://github.com/stephane-caron/openrave_models/tree/master/JAXON COLLADA], [https://github.com/robot-descriptions/jaxon_description URDF], [https://github.com/start-jsk/rtmros_choreonoid/tree/master/jvrc_models/JAXON_JVRC VRML] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| Atlas DRC (v3) || Boston Dynamics || URDF || [https://github.com/RobotLocomotion/models/blob/master/atlas/atlas_convex_hull.urdf URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Gundam RX-78 || Bandai Namco Filmworks || URDF || [https://github.com/gundam-global-challenge/gundam_robot/tree/master/gundam_rx78_description URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| Romeo || Aldebaran Robotics || URDF || [https://github.com/ros-aldebaran/romeo_robot/tree/master/romeo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| SigmaBan || Rhoban || URDF || [https://github.com/Rhoban/sigmaban_urdf URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| Robonaut 2 || NASA JSC Robotics || URDF || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/r2_description URDF] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| TALOS || PAL Robotics || URDF, Xacro || [https://github.com/stack-of-tasks/talos-data URDF], [https://github.com/pal-robotics/talos_robot/tree/kinetic-devel/talos_description Xacro] || LGPL-3.0, Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| WALK-MAN || IIT || Xacro || [https://github.com/ADVRHumanoids/iit-walkman-ros-pkg/tree/master/walkman_urdf Xacro] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Draco3 || Apptronik || URDF || [https://github.com/shbang91/draco3_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| ergoCub || IIT || URDF || [https://github.com/icub-tech-iit/ergocub-software/tree/master/urdf/ergoCub URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
=== End Effectors ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Allegro Hand || Wonik Robotics || URDF, MJCF || [https://github.com/RobotLocomotion/models/tree/master/allegro_hand_description/urdf URDF], [https://github.com/google-deepmind/mujoco_menagerie/tree/main/wonik_allegro MJCF] || BSD || ✔️ || ✔️ || ✔️
|-
| Shadow Hand E3M5 || The Shadow Robot Company || MJCF || [https://github.com/deepmind/mujoco_menagerie/tree/main/shadow_hand MJCF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Robotiq 2F-85 || Robotiq || MJCF, URDF, Xacro || [https://github.com/deepmind/mujoco_menagerie/tree/main/robotiq_2f85 MJCF], [https://github.com/a-price/robotiq_arg85_description URDF], [https://github.com/ros-industrial/robotiq/tree/kinetic-devel/robotiq_2f_85_gripper_visualization Xacro] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| BarrettHand || Barrett Technology || URDF || [https://github.com/jhu-lcsr-attic/bhand_model/tree/master/robots URDF] || BSD || ✖️ || ✔️ || ✔️
|-
| WSG 50 || SCHUNK || SDF || [https://github.com/RobotLocomotion/models/tree/master/wsg_50_description SDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
=== Educational ===
{| class="wikitable"
|-
! Name !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Double Pendulum || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/double_pendulum_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Simple Humanoid || URDF || [https://github.com/laas/simple_humanoid_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✖️
|-
| TriFingerEdu || URDF || [https://github.com/facebookresearch/differentiable-robot-model/tree/main/diff_robot_data/trifinger_edu_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| FingerEdu || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/finger_edu_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
6038fa81cefc3547ef07c13804d9a8b329a54e2d
1230
1229
2024-05-24T02:14:28Z
Vrtnis
21
wikitext
text/x-wiki
=== Humanoids ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! URL !! License !! Meshes !! Inertias !! Collisions
|-
| Stompy || K-Scale Labs || URDF, MJCF || [https://stompy.kscale.dev URDF], [https://stompy.kscale.dev MJCF] || MIT || ✔️ || ✔️ || ✔️
|-
| Digit || Agility Robotics || URDF || [https://github.com/adubredu/DigitRobot.jl URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| H1 || UNITREE Robotics || MJCF || [https://github.com/google-deepmind/mujoco_menagerie/tree/main/unitree_h1 MJCF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Atlas v4 || Boston Dynamics || URDF || [https://github.com/openai/roboschool/tree/1.0.49/roboschool/models_robot/atlas_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| Valkyrie || NASA JSC Robotics || URDF, Xacro || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/val_description/model URDF], [https://gitlab.com/nasa-jsc-robotics/val_description Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| JVRC-1 || AIST || MJCF, URDF || [https://github.com/isri-aist/jvrc_mj_description/ MJCF], [https://github.com/stephane-caron/jvrc_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| iCub || IIT || URDF || [https://github.com/robotology/icub-models/tree/master/iCub URDF] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| JAXON || JSK || COLLADA, URDF, VRML || [https://github.com/stephane-caron/openrave_models/tree/master/JAXON COLLADA], [https://github.com/robot-descriptions/jaxon_description URDF], [https://github.com/start-jsk/rtmros_choreonoid/tree/master/jvrc_models/JAXON_JVRC VRML] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| Atlas DRC (v3) || Boston Dynamics || URDF || [https://github.com/RobotLocomotion/models/blob/master/atlas/atlas_convex_hull.urdf URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Gundam RX-78 || Bandai Namco Filmworks || URDF || [https://github.com/gundam-global-challenge/gundam_robot/tree/master/gundam_rx78_description URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| Romeo || Aldebaran Robotics || URDF || [https://github.com/ros-aldebaran/romeo_robot/tree/master/romeo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| SigmaBan || Rhoban || URDF || [https://github.com/Rhoban/sigmaban_urdf URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| Robonaut 2 || NASA JSC Robotics || URDF || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/r2_description URDF] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| TALOS || PAL Robotics || URDF, Xacro || [https://github.com/stack-of-tasks/talos-data URDF], [https://github.com/pal-robotics/talos_robot/tree/kinetic-devel/talos_description Xacro] || LGPL-3.0, Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| WALK-MAN || IIT || Xacro || [https://github.com/ADVRHumanoids/iit-walkman-ros-pkg/tree/master/walkman_urdf Xacro] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Draco3 || Apptronik || URDF || [https://github.com/shbang91/draco3_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| ergoCub || IIT || URDF || [https://github.com/icub-tech-iit/ergocub-software/tree/master/urdf/ergoCub URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
=== End Effectors ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Allegro Hand || Wonik Robotics || URDF, MJCF || [https://github.com/RobotLocomotion/models/tree/master/allegro_hand_description/urdf URDF], [https://github.com/google-deepmind/mujoco_menagerie/tree/main/wonik_allegro MJCF] || BSD || ✔️ || ✔️ || ✔️
|-
| Shadow Hand E3M5 || The Shadow Robot Company || MJCF || [https://github.com/deepmind/mujoco_menagerie/tree/main/shadow_hand MJCF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Robotiq 2F-85 || Robotiq || MJCF, URDF, Xacro || [https://github.com/deepmind/mujoco_menagerie/tree/main/robotiq_2f85 MJCF], [https://github.com/a-price/robotiq_arg85_description URDF], [https://github.com/ros-industrial/robotiq/tree/kinetic-devel/robotiq_2f_85_gripper_visualization Xacro] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| BarrettHand || Barrett Technology || URDF || [https://github.com/jhu-lcsr-attic/bhand_model/tree/master/robots URDF] || BSD || ✖️ || ✔️ || ✔️
|-
| WSG 50 || SCHUNK || SDF || [https://github.com/RobotLocomotion/models/tree/master/wsg_50_description SDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
=== Educational ===
{| class="wikitable"
|-
! Name !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Double Pendulum || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/double_pendulum_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Simple Humanoid || URDF || [https://github.com/laas/simple_humanoid_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✖️
|-
| TriFingerEdu || URDF || [https://github.com/facebookresearch/differentiable-robot-model/tree/main/diff_robot_data/trifinger_edu_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| FingerEdu || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/finger_edu_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
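As a hedged example of how the descriptions listed above can be loaded for quick inspection, the sketch below uses the mujoco and pybullet Python packages; the local file paths are placeholders for wherever a description has been downloaded (for example a mujoco_menagerie checkout for the MJCF models).
<syntaxhighlight lang="python">
# Rough sketch of loading descriptions from the tables above for inspection.
# The local paths are placeholders for wherever the files were downloaded.
import mujoco          # pip install mujoco
import pybullet as p   # pip install pybullet

# MJCF description (e.g. Unitree H1 from a mujoco_menagerie checkout)
model = mujoco.MjModel.from_xml_path("mujoco_menagerie/unitree_h1/scene.xml")
print("H1 degrees of freedom:", model.nv)

# URDF description (placeholder path to any URDF from the tables)
p.connect(p.DIRECT)
robot = p.loadURDF("simple_humanoid_description/urdf/robot.urdf")
print("URDF joints:", p.getNumJoints(robot))
</syntaxhighlight>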
53cd290136aee83c13bdcc99284dc34d1e823c43
Allen's Reinforcement Learning Notes
0
270
1231
1167
2024-05-24T20:07:08Z
108.211.178.220
0
wikitext
text/x-wiki
Allen's reinforcement learning notes
=== Links ===
* [https://rail.eecs.berkeley.edu/deeprlcourse-fa19/ Berkeley CS285]
* [https://www.youtube.com/watch?v=SupFHGbytvA&list=PL_iWQOsE6TfVYGEGiAOMaOzzv41Jfm_Ps Sergey Levine RL Lecture]
[[Category:Reinforcement Learning]]
=== Motivation ===
Consider a problem where we have to train a robot to pick up some object. A traditional ML algorithm might try to learn some function f(x) = y, where given some position x observed via the camera we output some behavior y. The trouble is that in the real world, the correct grab location is some function of the object and the physical environment, which is hard to intuitively ascertain by observation.
The motivation behind reinforcement learning is to repeatedly take observations, then sample the effects of actions on those observations (reward and new observation/state). Ultimately, we hope to create a policy pi that maps states or observations to actions.
=== Learning ===
Learning involves the agent taking actions and the environment returning a new state and reward.
* Input: <math>s_t</math>: States at each time step
* Output: <math>a_t</math>: Actions at each time step
* Data: <math>(s_1, a_1, r_1, ... , s_T, a_T, r_T)</math>
* Learn <math>\pi_\theta : s_t \to a_t</math> to maximize <math>\sum_t r_t</math>
=== State vs. Observation ===
A state is a complete representation of the physical world while the observation is some subset or representation of s. They are not necessarily the same in that we can't always infer s_t from o_t, but o_t is inferable from s_t. To think of it as a network of conditional probability, we have
* <math> s_1 \to o_1 \xrightarrow{\pi_\theta} a_1 </math> (policy)
* <math> s_1, a_1 \xrightarrow{p(s_{t+1} \mid s_t, a_t)} s_2 </math> (dynamics)
Note that theta represents the parameters of the policy (for example, the parameters of a neural network). Assumption: Markov Property - Future states are independent of past states given present states. This is the fundamental difference between states and observations.
=== Problem Representation ===
States and actions are typically continuous, so we often model the output policy as a density function that gives a probability distribution over actions at a given state.
The reward is a function of the state and action, <math>r(s, a)</math>, which maps a state-action pair to a scalar and tells us which states and actions are better. We often tune reward-function hyperparameters to make training faster.
=== Markov Chain & Decision Process===
Markov Chain: <math> M = \{S, T\} </math>, where S is the state space and T is the transition operator. The state space is the set of all states and can be discrete or continuous. The transition probabilities are represented in a matrix whose (i, j) entry is the probability of transitioning to state i from state j, so the state distribution at the next time step is obtained by multiplying the current distribution by the transition operator.
Markov Decision Process: <math> M = \{S, A, T, r\} </math>, where A is the action space. T is now a tensor indexed by the next state, the current state, and the current action: <math> T_{i, j, k} = p(s_{t+1} = i \mid s_t = j, a_t = k) </math>. r is the reward function.
=== Reinforcement Learning Algorithms - High-level ===
# Generate Samples (run policy)
# Fit a model/estimate something about how well policy is performing
# Improve policy
# Repeat
Policy Gradients: Directly differentiate the objective with respect to the policy parameters <math>\theta</math> and then perform gradient ascent
Value-based: Estimate the value function or Q-function of the optimal policy (the policy is often represented implicitly)
Actor-Critic: Estimate the value function or Q-function of the current policy and use it to compute a better policy gradient
Model-based: Estimate a transition model, then use it to improve the policy
=== REINFORCE ===
-
=== Temporal Difference Learning ===
Temporal Difference (TD) learning is a method for estimating the utility of states from state-action-outcome information. Suppose we have an initial value estimate <math>V_0(s)</math> and we observe a transition <math>(s, a, s', r)</math>. We can then apply the update <math>V_{t+1}(s) = (1 - \alpha)V_t(s) + \alpha\left(r + \gamma V_t(s')\right)</math>. Here <math>\alpha</math> is the learning rate, which controls how much new information is weighted relative to old information, while <math>\gamma</math> is the discount factor, which can be thought of as how much a reward received in the future counts toward our current value estimate.
=== Q Learning ===
Q Learning gives us a way to extract the optimal policy after learning. Instead of keeping track of the values of individual states, we keep track of Q values for state-action pairs, representing the utility of taking action a at state s.
How do we use this Q value? Two main ideas.
Idea 1: Policy iteration - if we have a policy <math> \pi </math> and we know <math> Q^\pi(s, a) </math>, we can improve the policy by deterministically setting the action at each state to be the argmax of the Q values over all possible actions at that state.
<math> Q_{i+1} (s,a) = (1 - \alpha) Q_i (s,a) + \alpha (r(s,a) + \gamma V_i(s'))</math>
Idea 2: Gradient update - If <math> Q^\pi(s, a) > V^\pi(s) </math>, then a is better than average, so we modify the policy to increase the probability of a.
d8b4ab53a0b25aa0c9633209de324325f8a35bf2
1232
1231
2024-05-24T20:09:45Z
Allen12
15
wikitext
text/x-wiki
Allen's reinforcement learning notes
=== Links ===
* [https://rail.eecs.berkeley.edu/deeprlcourse-fa19/ Berkeley CS285]
* [https://www.youtube.com/watch?v=SupFHGbytvA&list=PL_iWQOsE6TfVYGEGiAOMaOzzv41Jfm_Ps Sergey Levine RL Lecture]
[[Category:Reinforcement Learning]]
=== Motivation ===
Consider a problem where we have to train a robot to pick up some object. A traditional ML algorithm might try to learn some function f(x) = y, where given some position x observed via the camera we output some behavior y. The trouble is that in the real world, the correct grab location is some function of the object and the physical environment, which is hard to intuitively ascertain by observation.
The motivation behind reinforcement learning is to repeatedly take observations, then sample the effects of actions on those observations (reward and new observation/state). Ultimately, we hope to create a policy <math>\pi</math> that maps states or observations to optimal actions.
=== Learning ===
Learning involves the agent taking actions and the environment returning a new state and reward (a minimal rollout sketch is given after the list below).
* Input: <math>s_t</math>: States at each time step
* Output: <math>a_t</math>: Actions at each time step
* Data: <math>(s_1, a_1, r_1, ... , s_T, a_T, r_T)</math>
* Learn <math>\pi_\theta : s_t \to a_t</math> to maximize <math>\sum_t r_t</math>
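The following is a minimal sketch of collecting one such trajectory with a Gymnasium environment; the environment name is just an illustrative stand-in.
<syntaxhighlight lang="python">
# Sketch: collect one trajectory (s_1, a_1, r_1, ..., s_T, a_T, r_T) by rolling
# out a (here random) policy in a Gymnasium environment. "CartPole-v1" is just
# an illustrative stand-in for whatever environment is actually of interest.
import gymnasium as gym

env = gym.make("CartPole-v1")
state, _ = env.reset()
trajectory, done = [], False
while not done:
    action = env.action_space.sample()   # stand-in for pi_theta(a_t | s_t)
    next_state, reward, terminated, truncated, _ = env.step(action)
    trajectory.append((state, action, reward))
    state, done = next_state, terminated or truncated
print(len(trajectory), "steps, return =", sum(r for _, _, r in trajectory))
</syntaxhighlight>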
=== State vs. Observation ===
A state is a complete representation of the physical world while the observation is some subset or representation of s. They are not necessarily the same in that we can't always infer s_t from o_t, but o_t is inferable from s_t. To think of it as a network of conditional probability, we have
* <math> s_1 \to o_1 \xrightarrow{\pi_\theta} a_1 </math> (policy)
* <math> s_1, a_1 \xrightarrow{p(s_{t+1} \mid s_t, a_t)} s_2 </math> (dynamics)
Note that theta represents the parameters of the policy (for example, the parameters of a neural network). Assumption: Markov Property - Future states are independent of past states given present states. This is the fundamental difference between states and observations.
=== Problem Representation ===
States and actions are typically continuous, so we often model the output policy as a density function that gives a probability distribution over actions at a given state.
The reward is a function of the state and action, <math>r(s, a)</math>, which maps a state-action pair to a scalar and tells us which states and actions are better. We often tune reward-function hyperparameters to make training faster.
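A tiny sketch of such a policy density for continuous actions, here a diagonal Gaussian whose mean comes from a toy linear map of the state; all sizes and weights are arbitrary illustration values.
<syntaxhighlight lang="python">
# Sketch of a policy represented as a density over continuous actions: a
# diagonal Gaussian whose mean comes from a toy linear map of the state.
import numpy as np

rng = np.random.default_rng(0)
W = 0.1 * rng.normal(size=(2, 8))    # toy "network": 8-dim state -> 2-dim action mean
log_std = np.zeros(2)                # learned log standard deviations

def sample_action(state):
    mean = W @ state                 # mean of the action distribution at this state
    return rng.normal(mean, np.exp(log_std))

print(sample_action(rng.normal(size=8)))
</syntaxhighlight>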
=== Markov Chain & Decision Process===
Markov Chain: <math> M = \{S, T\} </math>, where S is the state space and T is the transition operator. The state space is the set of all states and can be discrete or continuous. The transition probabilities are represented in a matrix whose (i, j) entry is the probability of transitioning to state i from state j, so the state distribution at the next time step is obtained by multiplying the current distribution by the transition operator.
Markov Decision Process: <math> M = \{S, A, T, r\} </math>, where A is the action space. T is now a tensor indexed by the next state, the current state, and the current action: <math> T_{i, j, k} = p(s_{t+1} = i \mid s_t = j, a_t = k) </math>. r is the reward function.
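A small numeric illustration of the transition operator acting on a state distribution; the 3-state chain below is made up purely for illustration.
<syntaxhighlight lang="python">
# Toy Markov chain: column j of T holds p(next state = i | current state = j),
# so left-multiplying the current state distribution gives the next one.
import numpy as np

T = np.array([[0.9, 0.2, 0.0],
              [0.1, 0.7, 0.3],
              [0.0, 0.1, 0.7]])      # each column sums to 1
mu = np.array([1.0, 0.0, 0.0])       # start deterministically in state 0

for _ in range(3):
    mu = T @ mu                      # mu_{t+1} = T mu_t
print(mu)
</syntaxhighlight>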
=== Reinforcement Learning Algorithms - High-level ===
# Generate Samples (run policy)
# Fit a model/estimate something about how well policy is performing
# Improve policy
# Repeat
Policy Gradients - Directly differentiate the objective with respect to the policy parameters <math> \theta </math>, then perform gradient ascent (equivalently, gradient descent on the negative objective)
Value-based: Estimate the value function or Q-function of the optimal policy (the policy is often represented implicitly)
Actor-Critic: Estimate the value function or Q-function of the current policy, and use it to compute a better policy gradient
Model-based: Estimate some transition model, and then use it to improve a policy
=== REINFORCE ===
-
=== Temporal Difference Learning ===
Temporal Difference (TD) learning is a method for estimating the utility of states given some state-action-outcome information. Suppose we have some initial value estimate <math> V_0(s) </math>, and we observe a transition <math> (s, a, s', r(s, a)) </math>. We can then use the update equation <math> V_{t+1}(s) = (1- \alpha)V_{t}(s)+\alpha(r(s, a) + \gamma V_t(s')) </math>. Here <math>\alpha</math> is the learning rate, which controls how much new information is weighted relative to old information, while <math>\gamma</math> is the discount factor, which can be thought of as how much a reward received in the future counts toward our current value estimate.
=== Q Learning ===
Q Learning gives us a way to extract the optimal policy after learning. Instead of keeping track of the values of individual states, we keep track of Q values for state-action pairs, representing the utility of taking action a at state s.
How do we use this Q value? Two main ideas.
Idea 1: Policy iteration - if we have a policy <math> \pi </math> and we know <math> Q^\pi (s, a) </math>, we can improve the policy by deterministically setting the action at each state to the argmax of <math> Q^\pi </math> over all possible actions at that state.
<math> Q_{i+1} (s,a) = (1 - \alpha) Q_i (s,a) + \alpha (r(s,a) + \gamma V_i(s'))</math>
Idea 2: Gradient update - if <math> Q^\pi(s, a) > V^\pi(s) </math>, then a is better than the average action at s, so we modify the policy to increase the probability of a.
c2b41c8f012378f234ab70ff016f29778e0673c6
1250
1232
2024-05-24T23:55:35Z
Allen12
15
wikitext
text/x-wiki
Allen's reinforcement learning notes
=== Links ===
* [https://rail.eecs.berkeley.edu/deeprlcourse-fa19/ Berkeley CS285]
* [https://www.youtube.com/watch?v=SupFHGbytvA&list=PL_iWQOsE6TfVYGEGiAOMaOzzv41Jfm_Ps Sergey Levine RL Lecture]
[[Category:Reinforcement Learning]]
=== Motivation ===
Consider a problem where we have to train a robot to pick up some object. A traditional ML algorithm might try to learn some function f(x) = y, where given some position x observed via the camera we output some behavior y. The trouble is that in the real world, the correct grab location is some function of the object and the physical environment, which is hard to intuitively ascertain by observation.
The motivation behind reinforcement learning is to repeatedly take observations, then sample the effects of actions on those observations (a reward and a new observation/state). Ultimately, we hope to create a policy <math>\pi</math> that maps states or observations to optimal actions.
=== Learning ===
Learning involves the agent taking actions and the environment returning a new state and reward.
* Input: <math>s_t</math>: States at each time step
* Output: <math>a_t</math>: Actions at each time step
* Data: <math>(s_1, a_1, r_1, ... , s_T, a_T, r_T)</math>
* Learn <math>\pi_\theta : s_t \to a_t </math> to maximize <math> \sum_t r_t </math>
=== State vs. Observation ===
A state is a complete representation of the physical world, while the observation is some subset or representation of the state. They are not necessarily the same: we can't always infer <math> s_t </math> from <math> o_t </math>, but <math> o_t </math> is inferable from <math> s_t </math>. Viewed as a network of conditional probabilities, we have
* <math> s_1 \to o_1 \to a_1 </math>, with the action drawn from the policy <math> \pi_\theta(a_1 \mid o_1) </math> (policy)
* <math> (s_1, a_1) \to s_2 </math>, with the next state drawn from <math> p(s_{t+1} \mid s_t, a_t) </math> (dynamics)
Note that <math> \theta </math> represents the parameters of the policy (for example, the weights of a neural network). Assumption: Markov property - future states are independent of past states given the present state. This is the fundamental difference between states and observations.
=== Problem Representation ===
States and actions are typically continuous - thus, we often want to model our output policy as a density function, which gives a probability distribution over actions at a given state.
The reward is a real-valued function of the state and action, <math> r : S \times A \to \mathbb{R} </math>, which tells us which states and actions are better. We often tune the reward function and its hyperparameters to make training faster.
=== Markov Chain & Decision Process===
Markov Chain: <math> M = \{S, T\} </math>, where S is the state space and T is the transition operator. The state space is the set of all states, and can be discrete or continuous. The transition probabilities are represented in a matrix whose (i, j) entry is the probability of transitioning to state i from state j, so we can express the state distribution at the next time step by multiplying the current distribution by the transition operator.
Markov Decision Process: <math> M = \{S, A, T, r\} </math>, where A is the action space. T is now a tensor indexed by the current state, current action, and next state. We let <math> T_{i, j, k} = p(s_{t+1} = i \mid s_t = j, a_t = k) </math>. r is the reward function.
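To make the transition-operator view concrete, here is a minimal NumPy sketch (not part of the original notes; the 3-state chain is made up for illustration). Each column of <math> T </math> holds the probabilities of the next state given the current state, so advancing the state distribution one time step is a matrix-vector product:
<syntaxhighlight lang="python">
import numpy as np

# Hypothetical 3-state Markov chain: entry (i, j) is p(next state = i | current state = j),
# so each column sums to 1.
T = np.array([
    [0.9, 0.2, 0.0],
    [0.1, 0.7, 0.3],
    [0.0, 0.1, 0.7],
])

mu = np.array([1.0, 0.0, 0.0])  # start in state 0 with probability 1

# Applying the transition operator advances the state distribution one step.
for t in range(3):
    mu = T @ mu
    print(f"step {t + 1}: {mu}")
</syntaxhighlight>
For an MDP there is one such matrix per action, which is exactly the tensor <math> T_{i, j, k} </math> described above.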
=== Reinforcement Learning Algorithms - High-level ===
# Generate Samples (run policy)
# Fit a model/estimate something about how well policy is performing
# Improve policy
# Repeat
Policy Gradients - Directly differentiate the objective with respect to the policy parameters <math> \theta </math>, then perform gradient ascent (equivalently, gradient descent on the negative objective)
Value-based: Estimate the value function or Q-function of the optimal policy (the policy is often represented implicitly)
Actor-Critic: Estimate the value function or Q-function of the current policy, and use it to compute a better policy gradient
Model-based: Estimate some transition model, and then use it to improve a policy
=== REINFORCE ===
-
=== Temporal Difference Learning ===
Temporal Difference (TD) learning is a method for estimating the utility of states given some state-action-outcome information. Suppose we have some initial value estimate <math> V_0(s) </math>, and we observe a transition <math> (s, a, s', r(s, a)) </math>. We can then use the update equation <math> V_{t+1}(s) = (1- \alpha)V_{t}(s)+\alpha(r(s, a) + \gamma V_t(s')) </math>. Here <math>\alpha</math> is the learning rate, which controls how much new information is weighted relative to old information, while <math>\gamma</math> is the discount factor, which can be thought of as how much a reward received in the future counts toward our current value estimate.
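As a concrete illustration of the update rule (a minimal sketch, not from the original notes; the toy chain, the random-walk transitions, and the constants are all made up):
<syntaxhighlight lang="python">
import random

# Tabular TD(0) on a toy 5-state chain; reaching the last state pays +1.
ALPHA = 0.1   # learning rate
GAMMA = 0.9   # discount factor
N_STATES = 5

V = {s: 0.0 for s in range(N_STATES)}

def td_update(V, s, r, s_next):
    """V(s) <- (1 - alpha) * V(s) + alpha * (r + gamma * V(s'))."""
    V[s] = (1 - ALPHA) * V[s] + ALPHA * (r + GAMMA * V[s_next])

for _ in range(1000):
    s = random.randrange(N_STATES - 1)
    s_next = max(0, min(s + random.choice([-1, 1]), N_STATES - 1))
    r = 1.0 if s_next == N_STATES - 1 else 0.0
    td_update(V, s, r, s_next)

print(V)
</syntaxhighlight>
Each observed transition nudges <math> V(s) </math> toward the bootstrapped target <math> r + \gamma V(s') </math>, weighted by the learning rate <math> \alpha </math>.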
=== Q Learning ===
Q Learning gives us a way to extract the optimal policy after learning. Instead of keeping track of the values of individual states, we keep track of Q values for state-action pairs, representing the utility of taking action a at state s.
How do we use this Q value? Two main ideas.
Idea 1: Policy iteration - if we have a policy <math> \pi </math> and we know <math> Q^\pi (s, a) </math>, we can improve the policy by deterministically setting the action at each state to the argmax of <math> Q^\pi </math> over all possible actions at that state.
<math> Q_{i+1} (s,a) = (1 - \alpha) Q_i (s,a) + \alpha (r(s,a) + \gamma V_i(s'))</math>
Idea 2: Gradient update - if <math> Q^\pi(s, a) > V^\pi(s) </math>, then a is better than the average action at s, so we modify the policy to increase the probability of a.
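A minimal tabular sketch of the two ideas (illustrative only; the tiny chain MDP, the epsilon-greedy exploration, and the constants are assumptions rather than part of the notes). Here <math> V_i(s') </math> is taken to be <math> \max_{a'} Q_i(s', a') </math>, i.e. the value of the greedy policy from Idea 1:
<syntaxhighlight lang="python">
import random

# Tabular Q-learning on a toy 4-state, 2-action MDP.
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
N_STATES, ACTIONS = 4, [0, 1]

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(s, a):
    """Made-up dynamics: action 1 moves right, action 0 moves left; the last state pays +1."""
    s_next = max(0, min(N_STATES - 1, s + (1 if a == 1 else -1)))
    return s_next, (1.0 if s_next == N_STATES - 1 else 0.0)

for _ in range(2000):
    s = random.randrange(N_STATES)
    # Epsilon-greedy version of Idea 1's argmax, so we keep exploring.
    if random.random() < EPSILON:
        a = random.choice(ACTIONS)
    else:
        a = max(ACTIONS, key=lambda a_: Q[(s, a_)])
    s_next, r = step(s, a)
    target = r + GAMMA * max(Q[(s_next, a_)] for a_ in ACTIONS)
    Q[(s, a)] = (1 - ALPHA) * Q[(s, a)] + ALPHA * target

# Extract the policy by taking the argmax of Q at each state.
policy = {s: max(ACTIONS, key=lambda a_: Q[(s, a_)]) for s in range(N_STATES)}
print(policy)
</syntaxhighlight>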
62e08d2e5aabfb712b9bc7966fd45f5c90251eb3
Allen's REINFORCE notes
0
282
1233
2024-05-24T20:11:06Z
Allen12
15
Created page with "Allen's REINFORCE notes === Links === * [http://www.incompleteideas.net/book/RLbook2020.pdf] [[Category:Reinforcement Learning]] === Motivation === Consider a problem wh..."
wikitext
text/x-wiki
Allen's REINFORCE notes
=== Links ===
* [http://www.incompleteideas.net/book/RLbook2020.pdf]
[[Category:Reinforcement Learning]]
=== Motivation ===
Consider a problem where we have to train a robot to pick up some object. A traditional ML algorithm might try to learn some function f(x) = y, where given some position x observed via the camera we output some behavior y. The trouble is that in the real world, the correct grab location is some function of the object and the physical environment, which is hard to intuitively ascertain by observation.
The motivation behind reinforcement learning is to repeatedly take observations, then sample the effects of actions on those observations (a reward and a new observation/state). Ultimately, we hope to create a policy <math>\pi</math> that maps states or observations to optimal actions.
=== Learning ===
Learning involves the agent taking actions and the environment returning a new state and reward.
* Input: <math>s_t</math>: States at each time step
* Output: <math>a_t</math>: Actions at each time step
* Data: <math>(s_1, a_1, r_1, ... , s_T, a_T, r_T)</math>
* Learn <math>\pi_\theta : s_t \to a_t </math> to maximize <math> \sum_t r_t </math>
=== State vs. Observation ===
A state is a complete representation of the physical world, while the observation is some subset or representation of the state. They are not necessarily the same: we can't always infer <math> s_t </math> from <math> o_t </math>, but <math> o_t </math> is inferable from <math> s_t </math>. Viewed as a network of conditional probabilities, we have
* <math> s_1 \to o_1 \to a_1 </math>, with the action drawn from the policy <math> \pi_\theta(a_1 \mid o_1) </math> (policy)
* <math> (s_1, a_1) \to s_2 </math>, with the next state drawn from <math> p(s_{t+1} \mid s_t, a_t) </math> (dynamics)
Note that <math> \theta </math> represents the parameters of the policy (for example, the weights of a neural network). Assumption: Markov property - future states are independent of past states given the present state. This is the fundamental difference between states and observations.
=== Problem Representation ===
States and actions are typically continuous - thus, we often want to model our output policy as a density function, which gives a probability distribution over actions at a given state.
The reward is a real-valued function of the state and action, <math> r : S \times A \to \mathbb{R} </math>, which tells us which states and actions are better. We often tune the reward function and its hyperparameters to make training faster.
=== Markov Chain & Decision Process===
Markov Chain: <math> M = \{S, T\} </math>, where S is the state space and T is the transition operator. The state space is the set of all states, and can be discrete or continuous. The transition probabilities are represented in a matrix whose (i, j) entry is the probability of transitioning to state i from state j, so we can express the state distribution at the next time step by multiplying the current distribution by the transition operator.
Markov Decision Process: <math> M = \{S, A, T, r\} </math>, where A is the action space. T is now a tensor indexed by the current state, current action, and next state. We let <math> T_{i, j, k} = p(s_{t+1} = i \mid s_t = j, a_t = k) </math>. r is the reward function.
=== Reinforcement Learning Algorithms - High-level ===
# Generate Samples (run policy)
# Fit a model/estimate something about how well policy is performing
# Improve policy
# Repeat
Policy Gradients - Directly differentiate the objective with respect to the policy parameters <math> \theta </math>, then perform gradient ascent (equivalently, gradient descent on the negative objective)
Value-based: Estimate the value function or Q-function of the optimal policy (the policy is often represented implicitly)
Actor-Critic: Estimate the value function or Q-function of the current policy, and use it to compute a better policy gradient
Model-based: Estimate some transition model, and then use it to improve a policy
=== REINFORCE ===
-
=== Temporal Difference Learning ===
Temporal Difference (TD) learning is a method for estimating the utility of states given some state-action-outcome information. Suppose we have some initial value estimate <math> V_0(s) </math>, and we observe a transition <math> (s, a, s', r(s, a)) </math>. We can then use the update equation <math> V_{t+1}(s) = (1- \alpha)V_{t}(s)+\alpha(r(s, a) + \gamma V_t(s')) </math>. Here <math>\alpha</math> is the learning rate, which controls how much new information is weighted relative to old information, while <math>\gamma</math> is the discount factor, which can be thought of as how much a reward received in the future counts toward our current value estimate.
=== Q Learning ===
Q Learning gives us a way to extract the optimal policy after learning. Instead of keeping track of the values of individual states, we keep track of Q values for state-action pairs, representing the utility of taking action a at state s.
How do we use this Q value? Two main ideas.
Idea 1: Policy iteration - if we have a policy <math> \pi </math> and we know <math> Q^\pi (s, a) </math>, we can improve the policy by deterministically setting the action at each state to the argmax of <math> Q^\pi </math> over all possible actions at that state.
<math> Q_{i+1} (s,a) = (1 - \alpha) Q_i (s,a) + \alpha (r(s,a) + \gamma V_i(s'))</math>
Idea 2: Gradient update - if <math> Q^\pi(s, a) > V^\pi(s) </math>, then a is better than the average action at s, so we modify the policy to increase the probability of a.
86b2547c7288b924baa6045775913215963663a2
1234
1233
2024-05-24T20:11:25Z
Allen12
15
wikitext
text/x-wiki
Allen's REINFORCE notes
=== Links ===
* [http://www.incompleteideas.net/book/RLbook2020.pdf]
[[Category:Reinforcement Learning]]
=== Motivation ===
=== Learning ===
Learning involves the agent taking actions and the environment returning a new state and reward.
* Input: <math>s_t</math>: States at each time step
* Output: <math>a_t</math>: Actions at each time step
* Data: <math>(s_1, a_1, r_1, ... , s_T, a_T, r_T)</math>
* Learn <math>\pi_\theta : s_t \to a_t </math> to maximize <math> \sum_t r_t </math>
=== State vs. Observation ===
4427f0ba1681a908e5a390862d6ba2379b31ae3e
1235
1234
2024-05-24T20:24:12Z
Allen12
15
wikitext
text/x-wiki
Allen's REINFORCE notes
=== Links ===
* [http://www.incompleteideas.net/book/RLbook2020.pdf /RLbook2020]
[[Category:Reinforcement Learning]]
=== Motivation ===
=== Learning ===
Learning involves the agent taking actions and the environment returning a new state and reward.
* Input: <math>s_t</math>: States at each time step
* Output: <math>a_t</math>: Actions at each time step
* Data: <math>(s_1, a_1, r_1, ... , s_T, a_T, r_T)</math>
* Learn <math>\pi_\theta : s_t \to a_t </math> to maximize <math> \sum_t r_t </math>
=== State vs. Observation ===
fc04d2a33d3a64b26cbf95d98c19a19b2d10ddc9
1236
1235
2024-05-24T20:24:22Z
Allen12
15
wikitext
text/x-wiki
Allen's REINFORCE notes
=== Links ===
* [http://www.incompleteideas.net/book/RLbook2020.pdf RLbook2020]
[[Category:Reinforcement Learning]]
=== Motivation ===
=== Learning ===
Learning involves the agent taking actions and the environment returning a new state and reward.
* Input: <math>s_t</math>: States at each time step
* Output: <math>a_t</math>: Actions at each time step
* Data: <math>(s_1, a_1, r_1, ... , s_T, a_T, r_T)</math>
* Learn <math>\pi_\theta : s_t \to a_t </math> to maximize <math> \sum_t r_t </math>
=== State vs. Observation ===
fb450ca3cb73af7221f1e6713c4b667c94dc8416
1237
1236
2024-05-24T21:41:42Z
Allen12
15
/* Motivation */
wikitext
text/x-wiki
Allen's REINFORCE notes
=== Links ===
* [http://www.incompleteideas.net/book/RLbook2020.pdf RLbook2020]
[[Category:Reinforcement Learning]]
=== Motivation ===
Recall that the objective of Reinforcement Learning is to find an optimal policy <math>\pi^*</math> which we encode in a neural network with parameters <math>\theta^*</math>. These optimal parameters are defined as
<math>\theta^* = \text{argmax}_\theta E_{\tau \sim p_\theta(\tau)} \left[ \sum_t r(s_t, a_t) \right] </math>
=== Learning ===
Learning involves the agent taking actions and the environment returning a new state and reward.
* Input: <math>s_t</math>: States at each time step
* Output: <math>a_t</math>: Actions at each time step
* Data: <math>(s_1, a_1, r_1, ... , s_T, a_T, r_T)</math>
* Learn <math>\pi_\theta : s_t \to a_t </math> to maximize <math> \sum_t r_t </math>
=== State vs. Observation ===
cf8c15f7202d4ad8a6ee6c349f0fc683ad61c030
1238
1237
2024-05-24T21:42:01Z
Allen12
15
/* Motivation */
wikitext
text/x-wiki
Allen's REINFORCE notes
=== Links ===
* [http://www.incompleteideas.net/book/RLbook2020.pdf RLbook2020]
[[Category:Reinforcement Learning]]
=== Motivation ===
Recall that the objective of Reinforcement Learning is to find an optimal policy <math>\pi^*</math> which we encode in a neural network with parameters <math>\theta^*</math>. These optimal parameters are defined as
<math>\theta^* = \text{argmax}_\theta E_{\tau \sim p_\theta(\tau)} \left[ \sum_t r(s_t, a_t) \right] </math>
=== Learning ===
Learning involves the agent taking actions and the environment returning a new state and reward.
* Input: <math>s_t</math>: States at each time step
* Output: <math>a_t</math>: Actions at each time step
* Data: <math>(s_1, a_1, r_1, ... , s_T, a_T, r_T)</math>
* Learn <math>\pi_\theta : s_t \to a_t </math> to maximize <math> \sum_t r_t </math>
=== State vs. Observation ===
64d04a9b522c8c98f7bd207b5bddd45b5e36b97b
1239
1238
2024-05-24T21:42:21Z
Allen12
15
wikitext
text/x-wiki
Allen's REINFORCE notes
=== Links ===
* [http://www.incompleteideas.net/book/RLbook2020.pdf RLbook2020]
[[Category:Reinforcement Learning]]
=== Motivation ===
Recall that the objective of Reinforcement Learning is to find an optimal policy <math> \pi^* </math> which we encode in a neural network with parameters <math>\theta^*</math>. These optimal parameters are defined as
<math>\theta^* = \text{argmax}_\theta E_{\tau \sim p_\theta(\tau)} \left[ \sum_t r(s_t, a_t) \right] </math>
=== Learning ===
Learning involves the agent taking actions and the environment returning a new state and reward.
* Input: <math>s_t</math>: States at each time step
* Output: <math>a_t</math>: Actions at each time step
* Data: <math>(s_1, a_1, r_1, ... , s_T, a_T, r_T)</math>
* Learn <math>\pi_\theta : s_t \to a_t </math> to maximize <math> \sum_t r_t </math>
=== State vs. Observation ===
dc482ab67855780225cdd65199e9e32a0f20f0c2
1240
1239
2024-05-24T21:43:50Z
Allen12
15
wikitext
text/x-wiki
Allen's REINFORCE notes
=== Links ===
* [http://www.incompleteideas.net/book/RLbook2020.pdf RLbook2020]
[[Category:Reinforcement Learning]]
=== Motivation ===
Recall that the objective of Reinforcement Learning is to find an optimal policy <math> \pi^* </math> which we encode in a neural network with parameters <math>\theta^*</math>. These optimal parameters are defined as
<math>\theta^* = \text{argmax}_\theta E_{\tau \sim p_\theta(\tau)} \left[ \sum_t r(s_t, a_t) \right] </math>
=== Learning ===
Learning involves the agent taking actions and the environment returning a new state and reward.
* Input: <math>s_t</math>: States at each time step
* Output: <math>a_t</math>: Actions at each time step
* Data: <math>(s_1, a_1, r_1, ... , s_T, a_T, r_T)</math>
* Learn <math>\pi_\theta : s_t \to a_t </math> to maximize <math> \sum_t r_t </math>
=== State vs. Observation ===
afa2a2a0629c983a8716b35019d6b221dc436e02
1241
1240
2024-05-24T21:46:04Z
Allen12
15
/* Motivation */
wikitext
text/x-wiki
Allen's REINFORCE notes
=== Links ===
* [http://www.incompleteideas.net/book/RLbook2020.pdf RLbook2020]
[[Category:Reinforcement Learning]]
=== Motivation ===
Recall that the objective of Reinforcement Learning is to find an optimal policy <math> \pi^* </math> which we encode in a neural network with parameters <math>\theta^*</math>. These optimal parameters are defined as
<math>\theta^* = \text{argmax}_\theta E_{\tau \sim p_\theta(\tau)} \left[ \sum_t r(s_t, a_t) \right] </math>. Let's unpack what this means. In plain English, the optimal policy is the one for which the expected total reward, taken over trajectories generated by following the policy, is highest over all policies.
=== Learning ===
Learning involves the agent taking actions and the environment returning a new state and reward.
* Input: <math>s_t</math>: States at each time step
* Output: <math>a_t</math>: Actions at each time step
* Data: <math>(s_1, a_1, r_1, ... , s_T, a_T, r_T)</math>
* Learn <math>\pi_\theta : s_t \to a_t </math> to maximize <math> \sum_t r_t </math>
=== State vs. Observation ===
5d5ff0def524373ab8faea25bc516cf0fc62d168
1251
1241
2024-05-24T23:58:17Z
Allen12
15
wikitext
text/x-wiki
Allen's REINFORCE notes
=== Links ===
* [http://www.incompleteideas.net/book/RLbook2020.pdf RLbook2020]
[[Category:Reinforcement Learning]]
=== Motivation ===
Recall that the objective of Reinforcement Learning is to find an optimal policy <math> \pi^* </math> which we encode in a neural network with parameters <math>\theta^*</math>. These optimal parameters are defined as
<math>\theta^* = \text{argmax}_\theta E_{\tau \sim p_\theta(\tau)} \left[ \sum_t r(s_t, a_t) \right] </math>. Let's unpack what this means. In plain English, the optimal policy is the one for which the expected total reward, taken over trajectories (<math> \tau </math>) generated by following the policy, is highest over all policies.
=== Overview ===
# Initialize a neural network with input dimensions = observation dimensions and output dimensions = action dimensions. Remember, a policy is a mapping from observations to actions. If the action space is continuous, it may make more sense for the output to be one mean and one standard deviation for each component of the action.
# Repeat:
=== State vs. Observation ===
b06fbf05ed8ed2344a3ef915eddf013135276bb3
1252
1251
2024-05-25T00:03:34Z
Allen12
15
wikitext
text/x-wiki
Allen's REINFORCE notes
=== Links ===
* [http://www.incompleteideas.net/book/RLbook2020.pdf RLbook2020]
[[Category:Reinforcement Learning]]
=== Motivation ===
Recall that the objective of Reinforcement Learning is to find an optimal policy <math> \pi^* </math> which we encode in a neural network with parameters <math>\theta^*</math>. These optimal parameters are defined as
<math>\theta^* = \text{argmax}_\theta E_{\tau \sim p_\theta(\tau)} \left[ \sum_t r(s_t, a_t) \right] </math>. Let's unpack what this means. In plain English, the optimal policy is the one for which the expected total reward, taken over trajectories (<math> \tau </math>) generated by following the policy, is highest over all policies.
=== Overview ===
# Initialize a neural network with input dimensions = observation dimensions and output dimensions = action dimensions. Remember, a policy is a mapping from observations to actions. If the action space is continuous, it may make more sense for the output to be one mean and one standard deviation for each component of the action.
<syntaxhighlight lang="python" line>
# For # of episodes:
## While not terminated:
### Get observation from environment
### Use policy network to map observation to action distribution
### Randomly sample one action from action distribution
### Compute logarithmic probability of that action occurring
### Step environment using action and store reward
## Calculate loss over entire trajectory as function of probabilities and rewards
</syntaxhighlight>
=== Loss Function ===
c9e07c8cf509e9f20c1e46916b0ce522df3d288d
1253
1252
2024-05-25T00:05:30Z
Allen12
15
wikitext
text/x-wiki
Allen's REINFORCE notes
=== Links ===
* [http://www.incompleteideas.net/book/RLbook2020.pdf RLbook2020]
[[Category:Reinforcement Learning]]
=== Motivation ===
Recall that the objective of Reinforcement Learning is to find an optimal policy <math> \pi^* </math> which we encode in a neural network with parameters <math>\theta^*</math>. <math> \pi_\theta </math> is a mapping from observations to actions. These optimal parameters are defined as
<math>\theta^* = \text{argmax}_\theta E_{\tau \sim p_\theta(\tau)} \left[ \sum_t r(s_t, a_t) \right] </math>. Let's unpack what this means. In plain English, the optimal policy is the one for which the expected total reward, taken over trajectories (<math> \tau </math>) generated by following the policy, is highest over all policies.
=== Overview ===
<syntaxhighlight lang="bash" line>
Initialize neural network with input dimensions = observation dimensions and output dimensions = action dimensions
For # of episodes:
While not terminated:
Get observation from environment
Use policy network to map observation to action distribution
Randomly sample one action from action distribution
Compute logarithmic probability of that action occurring
Step environment using action and store reward
Calculate loss over entire trajectory as function of probabilities and rewards
</syntaxhighlight>
=== Loss Function ===
94090e441140e894774e82de1ca44232ec590fad
1254
1253
2024-05-25T00:05:44Z
Allen12
15
/* Overview */
wikitext
text/x-wiki
Allen's REINFORCE notes
=== Links ===
* [http://www.incompleteideas.net/book/RLbook2020.pdf RLbook2020]
[[Category:Reinforcement Learning]]
=== Motivation ===
Recall that the objective of Reinforcement Learning is to find an optimal policy <math> \pi^* </math> which we encode in a neural network with parameters <math>\theta^*</math>. <math> \pi_\theta </math> is a mapping from observations to actions. These optimal parameters are defined as
<math>\theta^* = \text{argmax}_\theta E_{\tau \sim p_\theta(\tau)} \left[ \sum_t r(s_t, a_t) \right] </math>. Let's unpack what this means. In plain English, the optimal policy is the one for which the expected total reward, taken over trajectories (<math> \tau </math>) generated by following the policy, is highest over all policies.
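For reference (a standard identity in RL, not spelled out in these notes): under the Markov property, the trajectory distribution we are sampling from factorizes as
<math> p_\theta(\tau) = p(s_1) \prod_{t=1}^{T} \pi_\theta(a_t \mid s_t) p(s_{t+1} \mid s_t, a_t) </math>,
so the parameters <math> \theta </math> enter the objective only through the policy terms, which is what makes the REINFORCE gradient tractable.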
=== Overview ===
<syntaxhighlight lang="bash" line>
Initialize neural network with input dimensions = observation dimensions and output dimensions = action dimensions
For # of episodes:
While not terminated:
Get observation from environment
Use policy network to map observation to action distribution
Randomly sample one action from action distribution
Compute logarithmic probability of that action occurring
Step environment using action and store reward
Calculate loss over entire trajectory as function of probabilities and rewards
</syntaxhighlight>
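As a concrete companion to the pseudocode above (a minimal sketch, not the notes' own implementation; it assumes the Gymnasium <code>CartPole-v1</code> environment and a small PyTorch policy network, and it omits baselines and other variance-reduction tricks):
<syntaxhighlight lang="python">
import gymnasium as gym
import torch
import torch.nn as nn

env = gym.make("CartPole-v1")
obs_dim = env.observation_space.shape[0]
n_actions = env.action_space.n

# Policy network: observation -> action logits (discrete actions here).
policy = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, n_actions))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-2)
gamma = 0.99

for episode in range(500):
    obs, _ = env.reset()
    log_probs, rewards = [], []
    done = False
    while not done:
        logits = policy(torch.as_tensor(obs, dtype=torch.float32))
        dist = torch.distributions.Categorical(logits=logits)
        action = dist.sample()                   # randomly sample one action
        log_probs.append(dist.log_prob(action))  # log probability of that action
        obs, reward, terminated, truncated, _ = env.step(action.item())
        rewards.append(reward)
        done = terminated or truncated

    # Discounted return-to-go for each step of the trajectory.
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.as_tensor(returns, dtype=torch.float32)

    # Loss over the entire trajectory as a function of log-probabilities and returns.
    loss = -(torch.stack(log_probs) * returns).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
</syntaxhighlight>
Minimizing this loss with gradient descent is the same as gradient ascent on the expected return, since the loss is the negative of the return-weighted log-likelihood of the sampled actions.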
=== Loss Function ===
2962a5d181a846e810724c04e71deb7522bb34d4
CAN Daddy
0
283
1242
2024-05-24T23:39:45Z
Vedant
24
Created page with "=== CAN Daddy === == Design Requirements =="
wikitext
text/x-wiki
=== CAN Daddy ===
== Design Requirements ==
52a6aaea9cae6a897b368d0991480b197c9d2948
1243
1242
2024-05-24T23:39:53Z
Vedant
24
/* CAN Daddy */
wikitext
text/x-wiki
== Design Requirements ==
bd8492fb86d2336f8b4e0db4807fd4489c171f4d
1244
1243
2024-05-24T23:40:05Z
Vedant
24
/* Design Requirements */
wikitext
text/x-wiki
== Design Requirements ==
*Hello there*
868027d6f003fdca4af4d8cf08c43242a634630a
1245
1244
2024-05-24T23:40:13Z
Vedant
24
/* Design Requirements */
wikitext
text/x-wiki
== Design Requirements ==
*Hello there
asdf
15a533bf51098cfa5d682d55b0869604cb10d26c
1246
1245
2024-05-24T23:40:22Z
Vedant
24
/* Design Requirements */
wikitext
text/x-wiki
== Design Requirements ==
*Hello there
d7a5bff3a94654e6334a2803e10a759200b5e791
1247
1246
2024-05-24T23:40:55Z
Vedant
24
/* Design Requirements */
wikitext
text/x-wiki
== Design Requirements ==
* <code>*</code> means a bullet point
* <code>=</code> repeated n times means a header of some kind. The heading text needs to be wrapped on both sides.
73f5f5fa681ea7cfb2a2c8675a1345ebf4e97381
K-Scale Cluster
0
16
1248
1040
2024-05-24T23:51:44Z
Ben
2
/* Andromeda Cluster */
wikitext
text/x-wiki
The K-Scale Labs clusters are shared clusters for robotics research. This page contains notes on how to access the clusters.
== Onboarding ==
To get onboarded, you should send us the public key that you want to use and maybe your preferred username.
=== Lambda Cluster ===
After being onboarded, you should receive the following information:
* Your user ID (for this example, we'll use <code>stompy</code>)
* The jumphost ID (for this example, we'll use <code>127.0.0.1</code>)
* The cluster ID (for this example, we'll use <code>127.0.0.2</code>)
To connect, you should be able to use the following command:
<syntaxhighlight lang="bash">
ssh -o ProxyCommand="ssh -i ~/.ssh/id_rsa -W %h:%p stompy@127.0.0.1" stompy@127.0.0.2 -i ~/.ssh/id_rsa
</syntaxhighlight>
Note that <code>~/.ssh/id_rsa</code> should point to your private key file.
Alternatively, you can add the following to your SSH config file, which should allow you to connect directly.
Use your favorite editor to open the SSH config file (normally located at <code>~/.ssh/config</code> on Ubuntu) and paste the following:
<syntaxhighlight lang="text">
Host jumphost
User stompy
Hostname 127.0.0.1
IdentityFile ~/.ssh/id_rsa
Host cluster
User stompy
Hostname 127.0.0.2
ProxyJump jumphost
IdentityFile ~/.ssh/id_rsa
</syntaxhighlight>
After setting this up, you can use the command <code>ssh cluster</code> to directly connect.
You can also access via VS Code. Tutorial of using <code>ssh</code> in VS Code is [https://code.visualstudio.com/docs/remote/ssh-tutorial here].
Please inform us if you have any issues!
=== Notes ===
* You may need to restart <code>ssh</code> to get it working.
* You may be sharing your part of the cluster with other users. If so, it is a good idea to avoid using all the GPUs. If you're training models in PyTorch, you can do this by setting the <code>CUDA_VISIBLE_DEVICES</code> environment variable.
* You should avoid storing data files and model checkpoints in your root directory. Instead, use the <code>/ephemeral</code> directory. Your home directory should come with a symlink to a subdirectory which you have write access to.
=== Andromeda Cluster ===
The Andromeda cluster is a different cluster which uses Slurm for job management. Authentication is different from the Lambda cluster - Ben will provide instructions directly.
Don't do anything computationally expensive on the main node or you will crash it for everyone. Instead, when you need to run some experiments, reserve a GPU (see below).
==== Reserving a GPU ====
Here is a script you can use for getting an interactive node through Slurm.
<syntaxhighlight lang="bash">
gpunode () {
local job_id=$(squeue -u $USER -h -t R -o %i -n gpunode)
if [[ -n $job_id ]]
then
echo "Attaching to job ID $job_id"
srun --jobid=$job_id --partition=$SLURM_GPUNODE_PARTITION --gpus=$SLURM_GPUNODE_NUM_GPUS --cpus-per-gpu=$SLURM_GPUNODE_CPUS_PER_GPU --pty $SLURM_XPUNODE_SHELL
return 0
fi
echo "Creating new job"
srun --partition=$SLURM_GPUNODE_PARTITION --gpus=$SLURM_GPUNODE_NUM_GPUS --cpus-per-gpu=$SLURM_GPUNODE_CPUS_PER_GPU --interactive --job-name=gpunode --pty $SLURM_XPUNODE_SHELL
}
</syntaxhighlight>
Example env vars:
<syntaxhighlight lang="bash">
export SLURM_GPUNODE_PARTITION='compute'
export SLURM_GPUNODE_NUM_GPUS=1
export SLURM_GPUNODE_CPUS_PER_GPU=4
export SLURM_XPUNODE_SHELL='/bin/bash'
</syntaxhighlight>
Integrate the example script into your shell (for example, by sourcing it from your <code>~/.bashrc</code>), then run <code>gpunode</code>.
You can see partition options by running <code>sinfo</code>.
You might get an error like this: <code>groups: cannot find name for group ID 1506</code>. But things should still run fine. Check with <code>nvidia-smi</code>.
==== Useful Commands ====
Set a node state back to normal:
<syntaxhighlight lang="bash">
sudo scontrol update nodename='nodename' state=resume
</syntaxhighlight>
[[Category:K-Scale]]
a96941b1701791c89cc07584a7c12532e8ff6816
1249
1248
2024-05-24T23:55:00Z
Ben
2
/* Andromeda Cluster */
wikitext
text/x-wiki
The K-Scale Labs clusters are shared clusters for robotics research. This page contains notes on how to access the clusters.
== Onboarding ==
To get onboarded, you should send us the public key that you want to use and maybe your preferred username.
=== Lambda Cluster ===
After being onboarded, you should receive the following information:
* Your user ID (for this example, we'll use <code>stompy</code>)
* The jumphost ID (for this example, we'll use <code>127.0.0.1</code>)
* The cluster ID (for this example, we'll use <code>127.0.0.2</code>)
To connect, you should be able to use the following command:
<syntaxhighlight lang="bash">
ssh -o ProxyCommand="ssh -i ~/.ssh/id_rsa -W %h:%p stompy@127.0.0.1" stompy@127.0.0.2 -i ~/.ssh/id_rsa
</syntaxhighlight>
Note that <code>~/.ssh/id_rsa</code> should point to your private key file.
Alternatively, you can add the following to your SSH config file, which should allow you to connect directly.
Use your favorite editor to open the SSH config file (normally located at <code>~/.ssh/config</code> on Ubuntu) and paste the following:
<syntaxhighlight lang="text">
Host jumphost
User stompy
Hostname 127.0.0.1
IdentityFile ~/.ssh/id_rsa
Host cluster
User stompy
Hostname 127.0.0.2
ProxyJump jumphost
IdentityFile ~/.ssh/id_rsa
</syntaxhighlight>
After setting this up, you can use the command <code>ssh cluster</code> to directly connect.
You can also access via VS Code. Tutorial of using <code>ssh</code> in VS Code is [https://code.visualstudio.com/docs/remote/ssh-tutorial here].
Please inform us if you have any issues!
=== Notes ===
* You may need to restart <code>ssh</code> to get it working.
* You may be sharing your part of the cluster with other users. If so, it is a good idea to avoid using all the GPUs. If you're training models in PyTorch, you can do this by setting the <code>CUDA_VISIBLE_DEVICES</code> environment variable.
* You should avoid storing data files and model checkpoints in your root directory. Instead, use the <code>/ephemeral</code> directory. Your home directory should come with a symlink to a subdirectory which you have write access to.
=== Andromeda Cluster ===
The Andromeda cluster is a different cluster which uses Slurm for job management. Authentication is different from the Lambda cluster - Ben will provide instructions directly.
Don't do anything computationally expensive on the main node or you will crash it for everyone. Instead, when you need to run some experiments, reserve a GPU (see below).
==== SLURM Commands ====
Show all currently running jobs:
<syntaxhighlight lang="bash">
squeue
</syntaxhighlight>
Show your own running jobs:
<syntaxhighlight lang="bash">
squeue --me
</syntaxhighlight>
Show the available partitions on the cluster:
<syntaxhighlight lang="bash">
sinfo
</syntaxhighlight>
You'll see something like this:
<syntaxhighlight lang="bash">
$ sinfo
PARTITION AVAIL TIMELIMIT NODES STATE NODELIST
compute* up infinite 8 idle compute-permanent-node-[68,285,493,580,625-626,749,801]
</syntaxhighlight>
This means:
* There is one compute node type, called <code>compute</code>
* There are 8 nodes of that type, all currently in <code>idle</code> state
* The node names are things like <code>compute-permanent-node-68</code>
==== Reserving a GPU ====
Here is a script you can use for getting an interactive node through Slurm.
<syntaxhighlight lang="bash">
gpunode () {
local job_id=$(squeue -u $USER -h -t R -o %i -n gpunode)
if [[ -n $job_id ]]
then
echo "Attaching to job ID $job_id"
srun --jobid=$job_id --partition=$SLURM_GPUNODE_PARTITION --gpus=$SLURM_GPUNODE_NUM_GPUS --cpus-per-gpu=$SLURM_GPUNODE_CPUS_PER_GPU --pty $SLURM_XPUNODE_SHELL
return 0
fi
echo "Creating new job"
srun --partition=$SLURM_GPUNODE_PARTITION --gpus=$SLURM_GPUNODE_NUM_GPUS --cpus-per-gpu=$SLURM_GPUNODE_CPUS_PER_GPU --interactive --job-name=gpunode --pty $SLURM_XPUNODE_SHELL
}
</syntaxhighlight>
Example env vars:
<syntaxhighlight lang="bash">
export SLURM_GPUNODE_PARTITION='compute'
export SLURM_GPUNODE_NUM_GPUS=1
export SLURM_GPUNODE_CPUS_PER_GPU=4
export SLURM_XPUNODE_SHELL='/bin/bash'
</syntaxhighlight>
Integrate the example script into your shell (for example, by sourcing it from your <code>~/.bashrc</code>), then run <code>gpunode</code>.
You can see partition options by running <code>sinfo</code>.
You might get an error like this: <code>groups: cannot find name for group ID 1506</code>. But things should still run fine. Check with <code>nvidia-smi</code>.
==== Useful Commands ====
Set a node state back to normal:
<syntaxhighlight lang="bash">
sudo scontrol update nodename='nodename' state=resume
</syntaxhighlight>
[[Category:K-Scale]]
ba245573d96b2a29375532ad5e339aedfcc58286
1255
1249
2024-05-25T00:06:27Z
Ben
2
/* SLURM Commands */
wikitext
text/x-wiki
The K-Scale Labs clusters are shared clusters for robotics research. This page contains notes on how to access the clusters.
== Onboarding ==
To get onboarded, you should send us the public key that you want to use and maybe your preferred username.
=== Lambda Cluster ===
After being onboarded, you should receive the following information:
* Your user ID (for this example, we'll use <code>stompy</code>)
* The jumphost ID (for this example, we'll use <code>127.0.0.1</code>)
* The cluster ID (for this example, we'll use <code>127.0.0.2</code>)
To connect, you should be able to use the following command:
<syntaxhighlight lang="bash">
ssh -o ProxyCommand="ssh -i ~/.ssh/id_rsa -W %h:%p stompy@127.0.0.1" stompy@127.0.0.2 -i ~/.ssh/id_rsa
</syntaxhighlight>
Note that <code>~/.ssh/id_rsa</code> should point to your private key file.
Alternatively, you can add the following to your SSH config file, which should allow you to connect directly.
Use your favorite editor to open the SSH config file (normally located at <code>~/.ssh/config</code> on Ubuntu) and paste the following:
<syntaxhighlight lang="text">
Host jumphost
User stompy
Hostname 127.0.0.1
IdentityFile ~/.ssh/id_rsa
Host cluster
User stompy
Hostname 127.0.0.2
ProxyJump jumphost
IdentityFile ~/.ssh/id_rsa
</syntaxhighlight>
After setting this up, you can use the command <code>ssh cluster</code> to directly connect.
You can also access via VS Code. Tutorial of using <code>ssh</code> in VS Code is [https://code.visualstudio.com/docs/remote/ssh-tutorial here].
Please inform us if you have any issues!
=== Notes ===
* You may need to restart <code>ssh</code> to get it working.
* You may be sharing your part of the cluster with other users. If so, it is a good idea to avoid using all the GPUs. If you're training models in PyTorch, you can do this by setting the <code>CUDA_VISIBLE_DEVICES</code> environment variable.
* You should avoid storing data files and model checkpoints in your root directory. Instead, use the <code>/ephemeral</code> directory. Your home directory should come with a symlink to a subdirectory which you have write access to.
=== Andromeda Cluster ===
The Andromeda cluster is a different cluster which uses Slurm for job management. Authentication is different from the Lambda cluster - Ben will provide instructions directly.
Don't do anything computationally expensive on the main node or you will crash it for everyone. Instead, when you need to run some experiments, reserve a GPU (see below).
==== SLURM Commands ====
Show all currently running jobs:
<syntaxhighlight lang="bash">
squeue
</syntaxhighlight>
Show your own running jobs:
<syntaxhighlight lang="bash">
squeue --me
</syntaxhighlight>
Show the available partitions on the cluster:
<syntaxhighlight lang="bash">
sinfo
</syntaxhighlight>
You'll see something like this:
<syntaxhighlight lang="bash">
$ sinfo
PARTITION AVAIL TIMELIMIT NODES STATE NODELIST
compute* up infinite 8 idle compute-permanent-node-[68,285,493,580,625-626,749,801]
</syntaxhighlight>
This means:
* There is one compute node type, called <code>compute</code>
* There are 8 nodes of that type, all currently in <code>idle</code> state
* The node names are things like <code>compute-permanent-node-68</code>
To launch a job, use [https://slurm.schedmd.com/srun.html srun] or [https://slurm.schedmd.com/sbatch.html sbatch].
* '''srun''' runs a command directly with the requested resources
* '''sbatch''' queues the job to run when resources become available
For example, suppose I have the following Shell script:
<syntaxhighlight lang="bash">
#!/bin/bash
echo "Hello, world!"
nvidia-smi
</syntaxhighlight>
I can use <code>srun</code> to run this script with the following result:
<syntaxhighlight lang="bash">
$ srun --gpus 8 ./test.sh
Hello, world!
Sat May 25 00:02:23 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.54.15 Driver Version: 550.54.15 CUDA Version: 12.4 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
... truncated
</syntaxhighlight>
Alternatively, I can queue the job using <code>sbatch</code>, which gives me the following result:
<syntaxhighlight lang="bash">
$ sbatch test.sh
Submitted batch job 461
</syntaxhighlight>
After launching the job, we can see it running using our original <code>squeue</code> command:
<syntaxhighlight lang="bash">
$ squeue --me
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
461 compute test.sh ben R 0:37 1 compute-permanent-node-285
</syntaxhighlight>
==== Reserving a GPU ====
Here is a script you can use for getting an interactive node through Slurm.
<syntaxhighlight lang="bash">
gpunode () {
local job_id=$(squeue -u $USER -h -t R -o %i -n gpunode)
if [[ -n $job_id ]]
then
echo "Attaching to job ID $job_id"
srun --jobid=$job_id --partition=$SLURM_GPUNODE_PARTITION --gpus=$SLURM_GPUNODE_NUM_GPUS --cpus-per-gpu=$SLURM_GPUNODE_CPUS_PER_GPU --pty $SLURM_XPUNODE_SHELL
return 0
fi
echo "Creating new job"
srun --partition=$SLURM_GPUNODE_PARTITION --gpus=$SLURM_GPUNODE_NUM_GPUS --cpus-per-gpu=$SLURM_GPUNODE_CPUS_PER_GPU --interactive --job-name=gpunode --pty $SLURM_XPUNODE_SHELL
}
</syntaxhighlight>
Example env vars:
<syntaxhighlight lang="bash">
export SLURM_GPUNODE_PARTITION='compute'
export SLURM_GPUNODE_NUM_GPUS=1
export SLURM_GPUNODE_CPUS_PER_GPU=4
export SLURM_XPUNODE_SHELL='/bin/bash'
</syntaxhighlight>
Integrate the example script into your shell (for example, by sourcing it from your <code>~/.bashrc</code>), then run <code>gpunode</code>.
You can see partition options by running <code>sinfo</code>.
You might get an error like this: <code>groups: cannot find name for group ID 1506</code>. But things should still run fine. Check with <code>nvidia-smi</code>.
==== Useful Commands ====
Set a node state back to normal:
<syntaxhighlight lang="bash">
sudo scontrol update nodename='nodename' state=resume
</syntaxhighlight>
[[Category:K-Scale]]
b121c42578368944e5293d5c0defca46b1fa0b2d
1256
1255
2024-05-25T00:07:19Z
Ben
2
/* SLURM Commands */
wikitext
text/x-wiki
The K-Scale Labs clusters are shared clusters for robotics research. This page contains notes on how to access the clusters.
== Onboarding ==
To get onboarded, you should send us the public key that you want to use and maybe your preferred username.
=== Lambda Cluster ===
After being onboarded, you should receive the following information:
* Your user ID (for this example, we'll use <code>stompy</code>)
* The jumphost ID (for this example, we'll use <code>127.0.0.1</code>)
* The cluster ID (for this example, we'll use <code>127.0.0.2</code>)
To connect, you should be able to use the following command:
<syntaxhighlight lang="bash">
ssh -o ProxyCommand="ssh -i ~/.ssh/id_rsa -W %h:%p stompy@127.0.0.1" stompy@127.0.0.2 -i ~/.ssh/id_rsa
</syntaxhighlight>
Note that <code>~/.ssh/id_rsa</code> should point to your private key file.
Alternatively, you can add the following to your SSH config file, which should allow you to connect directly.
Use your favorite editor to open the SSH config file (normally located at <code>~/.ssh/config</code> on Ubuntu) and paste the following:
<syntaxhighlight lang="text">
Host jumphost
User stompy
Hostname 127.0.0.1
IdentityFile ~/.ssh/id_rsa
Host cluster
User stompy
Hostname 127.0.0.2
ProxyJump jumphost
IdentityFile ~/.ssh/id_rsa
</syntaxhighlight>
After setting this up, you can use the command <code>ssh cluster</code> to directly connect.
You can also access via VS Code. Tutorial of using <code>ssh</code> in VS Code is [https://code.visualstudio.com/docs/remote/ssh-tutorial here].
Please inform us if you have any issues!
=== Notes ===
* You may need to restart <code>ssh</code> to get it working.
* You may be sharing your part of the cluster with other users. If so, it is a good idea to avoid using all the GPUs. If you're training models in PyTorch, you can do this by setting the <code>CUDA_VISIBLE_DEVICES</code> environment variable.
* You should avoid storing data files and model checkpoints in your root directory. Instead, use the <code>/ephemeral</code> directory. Your home directory should come with a symlink to a subdirectory which you have write access to.
=== Andromeda Cluster ===
The Andromeda cluster is a different cluster which uses Slurm for job management. Authentication is different from the Lambda cluster - Ben will provide instructions directly.
Don't do anything computationally expensive on the main node or you will crash it for everyone. Instead, when you need to run some experiments, reserve a GPU (see below).
==== SLURM Commands ====
Show all currently running jobs:
<syntaxhighlight lang="bash">
squeue
</syntaxhighlight>
Show your own running jobs:
<syntaxhighlight lang="bash">
squeue --me
</syntaxhighlight>
Show the available partitions on the cluster:
<syntaxhighlight lang="bash">
sinfo
</syntaxhighlight>
You'll see something like this:
<syntaxhighlight lang="bash">
$ sinfo
PARTITION AVAIL TIMELIMIT NODES STATE NODELIST
compute* up infinite 8 idle compute-permanent-node-[68,285,493,580,625-626,749,801]
</syntaxhighlight>
This means:
* There is one compute node type, called <code>compute</code>
* There are 8 nodes of that type, all currently in <code>idle</code> state
* The node names are things like <code>compute-permanent-node-68</code>
To launch a job, use [https://slurm.schedmd.com/srun.html srun] or [https://slurm.schedmd.com/sbatch.html sbatch].
* '''srun''' runs a command directly with the requested resources
* '''sbatch''' queues the job to run when resources become available
For example, suppose I have the following Shell script:
<syntaxhighlight lang="bash">
#!/bin/bash
echo "Hello, world!"
nvidia-smi
</syntaxhighlight>
I can use <code>srun</code> to run this script with the following result:
<syntaxhighlight lang="bash">
$ srun --gpus 8 ./test.sh
Hello, world!
Sat May 25 00:02:23 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.54.15 Driver Version: 550.54.15 CUDA Version: 12.4 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
... truncated
</syntaxhighlight>
Alternatively, I can queue the job using <code>sbatch</code>, which gives me the following result:
<syntaxhighlight lang="bash">
$ sbatch test.sh
Submitted batch job 461
</syntaxhighlight>
After launching the job, we can see it running using our original <code>squeue</code> command:
<syntaxhighlight lang="bash">
$ squeue --me
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
461 compute test.sh ben R 0:37 1 compute-permanent-node-285
</syntaxhighlight>
==== Reserving a GPU ====
Here is a script you can use for getting an interactive node through Slurm.
<syntaxhighlight lang="bash">
gpunode () {
local job_id=$(squeue -u $USER -h -t R -o %i -n gpunode)
if [[ -n $job_id ]]
then
echo "Attaching to job ID $job_id"
srun --jobid=$job_id --partition=$SLURM_GPUNODE_PARTITION --gpus=$SLURM_GPUNODE_NUM_GPUS --cpus-per-gpu=$SLURM_GPUNODE_CPUS_PER_GPU --pty $SLURM_XPUNODE_SHELL
return 0
fi
echo "Creating new job"
srun --partition=$SLURM_GPUNODE_PARTITION --gpus=$SLURM_GPUNODE_NUM_GPUS --cpus-per-gpu=$SLURM_GPUNODE_CPUS_PER_GPU --interactive --job-name=gpunode --pty $SLURM_XPUNODE_SHELL
}
</syntaxhighlight>
Example env vars:
<syntaxhighlight lang="bash">
export SLURM_GPUNODE_PARTITION='compute'
export SLURM_GPUNODE_NUM_GPUS=1
export SLURM_GPUNODE_CPUS_PER_GPU=4
export SLURM_XPUNODE_SHELL='/bin/bash'
</syntaxhighlight>
Integrate the example script into your shell (for example, by sourcing it from your <code>~/.bashrc</code>), then run <code>gpunode</code>.
You can see partition options by running <code>sinfo</code>.
You might get an error like this: <code>groups: cannot find name for group ID 1506</code>. But things should still run fine. Check with <code>nvidia-smi</code>.
==== Useful Commands ====
Set a node state back to normal:
<syntaxhighlight lang="bash">
sudo scontrol update nodename='nodename' state=resume
</syntaxhighlight>
[[Category:K-Scale]]
28f6d95765ab8c70f7b5ba365d84a4f95646c125
1257
1256
2024-05-25T00:10:45Z
Ben
2
/* SLURM Commands */
wikitext
text/x-wiki
The K-Scale Labs clusters are shared clusters for robotics research. This page contains notes on how to access the clusters.
== Onboarding ==
To get onboarded, you should send us the public key that you want to use and maybe your preferred username.
=== Lambda Cluster ===
After being onboarded, you should receive the following information:
* Your user ID (for this example, we'll use <code>stompy</code>)
* The jumphost ID (for this example, we'll use <code>127.0.0.1</code>)
* The cluster ID (for this example, we'll use <code>127.0.0.2</code>)
To connect, you should be able to use the following command:
<syntaxhighlight lang="bash">
ssh -o ProxyCommand="ssh -i ~/.ssh/id_rsa -W %h:%p stompy@127.0.0.1" stompy@127.0.0.2 -i ~/.ssh/id_rsa
</syntaxhighlight>
Note that <code>~/.ssh/id_rsa</code> should point to your private key file.
Alternatively, you can add the following to your SSH config file, which should allow you to connect directly.
Use your favorite editor to open the SSH config file (normally located at <code>~/.ssh/config</code> on Ubuntu) and paste the following:
<syntaxhighlight lang="text">
Host jumphost
User stompy
Hostname 127.0.0.1
IdentityFile ~/.ssh/id_rsa
Host cluster
User stompy
Hostname 127.0.0.2
ProxyJump jumphost
IdentityFile ~/.ssh/id_rsa
</syntaxhighlight>
After setting this up, you can use the command <code>ssh cluster</code> to directly connect.
You can also access via VS Code. Tutorial of using <code>ssh</code> in VS Code is [https://code.visualstudio.com/docs/remote/ssh-tutorial here].
Please inform us if you have any issues!
=== Notes ===
* You may need to restart <code>ssh</code> to get it working.
* You may be sharing your part of the cluster with other users. If so, it is a good idea to avoid using all the GPUs. If you're training models in PyTorch, you can do this by setting the <code>CUDA_VISIBLE_DEVICES</code> environment variable.
* You should avoid storing data files and model checkpoints in your root directory. Instead, use the <code>/ephemeral</code> directory. Your home directory should come with a symlink to a subdirectory which you have write access to.
=== Andromeda Cluster ===
The Andromeda cluster is a different cluster which uses Slurm for job management. Authentication is different from the Lambda cluster - Ben will provide instructions directly.
Don't do anything computationally expensive on the main node or you will crash it for everyone. Instead, when you need to run some experiments, reserve a GPU (see below).
==== SLURM Commands ====
Show all currently running jobs:
<syntaxhighlight lang="bash">
squeue
</syntaxhighlight>
Show your own running jobs:
<syntaxhighlight lang="bash">
squeue --me
</syntaxhighlight>
Show the available partitions on the cluster:
<syntaxhighlight lang="bash">
sinfo
</syntaxhighlight>
You'll see something like this:
<syntaxhighlight lang="bash">
$ sinfo
PARTITION AVAIL TIMELIMIT NODES STATE NODELIST
compute* up infinite 8 idle compute-permanent-node-[68,285,493,580,625-626,749,801]
</syntaxhighlight>
This means:
* There is one compute node type, called <code>compute</code>
* There are 8 nodes of that type, all currently in <code>idle</code> state
* The node names are things like <code>compute-permanent-node-68</code>
To launch a job, use [https://slurm.schedmd.com/srun.html srun] or [https://slurm.schedmd.com/sbatch.html sbatch].
* '''srun''' runs a command directly with the requested resources
* '''sbatch''' queues the job to run when resources become available
For example, suppose I have the following Shell script:
<syntaxhighlight lang="bash">
#!/bin/bash
echo "Hello, world!"
nvidia-smi
</syntaxhighlight>
I can use <code>srun</code> to run this script with the following result:
<syntaxhighlight lang="bash">
$ srun --gpus 8 ./test.sh
Hello, world!
Sat May 25 00:02:23 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.54.15 Driver Version: 550.54.15 CUDA Version: 12.4 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
... truncated
</syntaxhighlight>
Alternatively, I can queue the job using <code>sbatch</code>, which gives me the following result:
<syntaxhighlight lang="bash">
$ sbatch test.sh
Submitted batch job 461
</syntaxhighlight>
After launching the job, we can see it running using our original <code>squeue</code> command:
<syntaxhighlight lang="bash">
$ squeue --me
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
461 compute test.sh ben R 0:37 1 compute-permanent-node-285
</syntaxhighlight>
We can cancel an in-progress job by running <code>scancel</code>:
<syntaxhighlight lang="bash">
scancel 461
</syntaxhighlight>
[https://github.com/kscalelabs/mlfab/blob/master/mlfab/task/launchers/slurm.py#L262-L309 Here is a reference] <code>sbatch</code> script for launching machine learning jobs.
==== Reserving a GPU ====
Here is a script you can use for getting an interactive node through Slurm.
<syntaxhighlight lang="bash">
gpunode () {
local job_id=$(squeue -u $USER -h -t R -o %i -n gpunode)
if [[ -n $job_id ]]
then
echo "Attaching to job ID $job_id"
srun --jobid=$job_id --partition=$SLURM_GPUNODE_PARTITION --gpus=$SLURM_GPUNODE_NUM_GPUS --cpus-per-gpu=$SLURM_GPUNODE_CPUS_PER_GPU --pty $SLURM_XPUNODE_SHELL
return 0
fi
echo "Creating new job"
srun --partition=$SLURM_GPUNODE_PARTITION --gpus=$SLURM_GPUNODE_NUM_GPUS --cpus-per-gpu=$SLURM_GPUNODE_CPUS_PER_GPU --interactive --job-name=gpunode --pty $SLURM_XPUNODE_SHELL
}
</syntaxhighlight>
Example env vars:
<syntaxhighlight lang="bash">
export SLURM_GPUNODE_PARTITION='compute'
export SLURM_GPUNODE_NUM_GPUS=1
export SLURM_GPUNODE_CPUS_PER_GPU=4
export SLURM_XPUNODE_SHELL='/bin/bash'
</syntaxhighlight>
Integrate the example script into your shell (for example, by sourcing it from your <code>~/.bashrc</code>), then run <code>gpunode</code>.
You can see partition options by running <code>sinfo</code>.
You might get an error like this: <code>groups: cannot find name for group ID 1506</code>. But things should still run fine. Check with <code>nvidia-smi</code>.
==== Useful Commands ====
Set a node state back to normal:
<syntaxhighlight lang="bash">
sudo scontrol update nodename='nodename' state=resume
</syntaxhighlight>
[[Category:K-Scale]]
dabca405a0ee660bb3855898bd7d0bfa0dd8010c
K-Scale Cluster
0
16
1258
1257
2024-05-25T00:12:26Z
Ben
2
/* SLURM Commands */
wikitext
text/x-wiki
The K-Scale Labs clusters are shared clusters for robotics research. This page contains notes on how to access the clusters.
== Onboarding ==
To get onboarded, you should send us the public key that you want to use and maybe your preferred username.
=== Lambda Cluster ===
After being onboarded, you should receive the following information:
* Your user ID (for this example, we'll use <code>stompy</code>)
* The jumphost ID (for this example, we'll use <code>127.0.0.1</code>)
* The cluster ID (for this example, we'll use <code>127.0.0.2</code>)
To connect, you should be able to use the following command:
<syntaxhighlight lang="bash">
ssh -o ProxyCommand="ssh -i ~/.ssh/id_rsa -W %h:%p stompy@127.0.0.1" stompy@127.0.0.2 -i ~/.ssh/id_rsa
</syntaxhighlight>
Note that <code>~/.ssh/id_rsa</code> should point to your private key file.
Alternatively, you can add the following to your SSH config file, which should allow you to connect directly.
Use your favorite editor to open the SSH config file (normally located at <code>~/.ssh/config</code> on Ubuntu) and paste the following:
<syntaxhighlight lang="text">
Host jumphost
User stompy
Hostname 127.0.0.1
IdentityFile ~/.ssh/id_rsa
Host cluster
User stompy
Hostname 127.0.0.2
ProxyJump jumphost
IdentityFile ~/.ssh/id_rsa
</syntaxhighlight>
After setting this up, you can use the command <code>ssh cluster</code> to directly connect.
You can also access via VS Code. Tutorial of using <code>ssh</code> in VS Code is [https://code.visualstudio.com/docs/remote/ssh-tutorial here].
Please inform us if you have any issues!
=== Notes ===
* You may need to restart <code>ssh</code> to get it working.
* You may be sharing your part of the cluster with other users. If so, it is a good idea to avoid using all the GPUs. If you're training models in PyTorch, you can do this by setting the <code>CUDA_VISIBLE_DEVICES</code> environment variable.
* You should avoid storing data files and model checkpoints in your root directory. Instead, use the <code>/ephemeral</code> directory. Your home directory should come with a symlink to a subdirectory which you have write access to.
=== Andromeda Cluster ===
The Andromeda cluster is a different cluster which uses Slurm for job management. Authentication is different from the Lambda cluster - Ben will provide instructions directly.
Don't do anything computationally expensive on the main node or you will crash it for everyone. Instead, when you need to run some experiments, reserve a GPU (see below).
==== SLURM Commands ====
Show all currently running jobs:
<syntaxhighlight lang="bash">
squeue
</syntaxhighlight>
Show your own running jobs:
<syntaxhighlight lang="bash">
squeue --me
</syntaxhighlight>
Show the available partitions on the cluster:
<syntaxhighlight lang="bash">
sinfo
</syntaxhighlight>
You'll see something like this:
<syntaxhighlight lang="bash">
$ sinfo
PARTITION AVAIL TIMELIMIT NODES STATE NODELIST
compute* up infinite 8 idle compute-permanent-node-[68,285,493,580,625-626,749,801]
</syntaxhighlight>
This means:
* There is one partition, called <code>compute</code> (the <code>*</code> marks the default partition); a per-node view is sketched below
* There are 8 nodes of that type, all currently in <code>idle</code> state
* The node names are things like <code>compute-permanent-node-68</code>
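If you want a per-node view of the same information, <code>sinfo</code> can break it out by node. A couple of commonly used invocations, as a sketch:
<syntaxhighlight lang="bash">
# One line per node, with state and other details
sinfo -N -l

# Restrict the view to a single partition
sinfo -p compute
</syntaxhighlight>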
To launch a job, use [https://slurm.schedmd.com/srun.html srun] or [https://slurm.schedmd.com/sbatch.html sbatch].
* '''srun''' runs a command directly with the requested resources
* '''sbatch''' queues the job to run when resources become available
For example, suppose I have the following shell script:
<syntaxhighlight lang="bash">
#!/bin/bash
echo "Hello, world!"
nvidia-smi
</syntaxhighlight>
I can use <code>srun</code> to run this script with the following result:
<syntaxhighlight lang="bash">
$ srun --gpus 8 ./test.sh
Hello, world!
Sat May 25 00:02:23 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.54.15 Driver Version: 550.54.15 CUDA Version: 12.4 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
... truncated
</syntaxhighlight>
Alternatively, I can queue the job using <code>sbatch</code>, which gives me the following result:
<syntaxhighlight lang="bash">
$ sbatch --gpus 16 test.sh
Submitted batch job 461
</syntaxhighlight>
We can instead specify <code>sbatch</code> options inside the shell script itself, using the following syntax:
<syntaxhighlight lang="bash">
#!/bin/bash
#SBATCH --gpus 16
echo "Hello, world!"
</syntaxhighlight>
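For reference, a slightly fuller header might look like the sketch below; the job name and output path are just examples, and the directives mirror the <code>srun</code> flags used above:
<syntaxhighlight lang="bash">
#!/bin/bash
#SBATCH --job-name=hello-gpu
#SBATCH --gpus=1
#SBATCH --cpus-per-gpu=4
#SBATCH --output=slurm-%j.out

echo "Hello, world!"
nvidia-smi
</syntaxhighlight>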
After launching the job, we can see it running using our original <code>squeue</code> command:
<syntaxhighlight lang="bash">
$ squeue --me
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
461 compute test.sh ben R 0:37 1 compute-permanent-node-285
</syntaxhighlight>
We can cancel an in-progress job by running <code>scancel</code>:
<syntaxhighlight lang="bash">
scancel 461
</syntaxhighlight>
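<code>scancel</code> also accepts other selectors, which can be handy when you have several jobs running. A sketch (the job name is just an example):
<syntaxhighlight lang="bash">
# Cancel every job you own
scancel -u $USER

# Cancel jobs by name
scancel --name=gpunode
</syntaxhighlight>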
[https://github.com/kscalelabs/mlfab/blob/master/mlfab/task/launchers/slurm.py#L262-L309 Here is a reference] <code>sbatch</code> script for launching machine learning jobs.
==== Reserving a GPU ====
Here is a script you can use for getting an interactive node through Slurm.
<syntaxhighlight lang="bash">
gpunode () {
    # Re-attach to an existing running gpunode job, if there is one
    local job_id=$(squeue -u "$USER" -h -t R -o %i -n gpunode)
    if [[ -n $job_id ]]; then
        echo "Attaching to job ID $job_id"
        srun --jobid="$job_id" --partition=$SLURM_GPUNODE_PARTITION --gpus=$SLURM_GPUNODE_NUM_GPUS --cpus-per-gpu=$SLURM_GPUNODE_CPUS_PER_GPU --pty $SLURM_XPUNODE_SHELL
        return 0
    fi
    # Otherwise, request a new interactive allocation
    echo "Creating new job"
    srun --partition=$SLURM_GPUNODE_PARTITION --gpus=$SLURM_GPUNODE_NUM_GPUS --cpus-per-gpu=$SLURM_GPUNODE_CPUS_PER_GPU --interactive --job-name=gpunode --pty $SLURM_XPUNODE_SHELL
}
</syntaxhighlight>
Example env vars:
<syntaxhighlight lang="bash">
export SLURM_GPUNODE_PARTITION='compute'
export SLURM_GPUNODE_NUM_GPUS=1
export SLURM_GPUNODE_CPUS_PER_GPU=4
export SLURM_XPUNODE_SHELL='/bin/bash'
</syntaxhighlight>
Integrate the example script into your shell configuration (a sketch is shown below), then run <code>gpunode</code>.
You can see the available partition options by running <code>sinfo</code>.
You might see an error like <code>groups: cannot find name for group ID 1506</code>; things should still run fine. Check GPU access with <code>nvidia-smi</code>.
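A minimal sketch of what "integrate into your shell" can look like, assuming a bash setup and a file name of your choosing:
<syntaxhighlight lang="bash">
# Save the function and the exports above into a file, e.g. ~/.gpunode.sh,
# then source it from your shell startup file
echo 'source ~/.gpunode.sh' >> ~/.bashrc
source ~/.bashrc

# Request (or re-attach to) an interactive GPU node
gpunode
</syntaxhighlight>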
==== Useful Commands ====
Set a node state back to normal:
<syntaxhighlight lang="bash">
sudo scontrol update nodename='nodename' state=resume
</syntaxhighlight>
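Before resuming a node, it can help to check why Slurm marked it down; <code>scontrol show node</code> prints the state, including the reason. A sketch, using the same placeholder node name:
<syntaxhighlight lang="bash">
# Inspect a node's state, including the Reason field
scontrol show node 'nodename'
</syntaxhighlight>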
[[Category:K-Scale]]
dae4374908d27b9303810d3d3b3ac9e5027cdcbf
Allen's REINFORCE notes
0
282
1259
1254
2024-05-25T00:15:52Z
Allen12
15
wikitext
text/x-wiki
Allen's REINFORCE notes
=== Links ===
* [http://www.incompleteideas.net/book/RLbook2020.pdf RLbook2020]
[[Category:Reinforcement Learning]]
=== Motivation ===
Recall that the objective of Reinforcement Learning is to find an optimal policy <math> \pi^* </math> which we encode in a neural network with parameters <math>\theta^*</math>. <math> \pi_\theta </math> is a mapping from observations to actions. These optimal parameters are defined as
<math>\theta^* = \text{argmax}_\theta E_{\tau \sim p_\theta(\tau)} \left[ \sum_t r(s_t, a_t) \right] </math>. Let's unpack what this means. To phrase it in english, this is basically saying that the optimal policy is one such that the expected value of the total reward over following a trajectory (<math> \tau </math>) determined by the policy is the highest over all policies.
=== Overview ===
<syntaxhighlight lang="bash" line>
Initialize neural network with input dimensions = observation dimensions and output dimensions = action dimensions
For each episode:
While not terminated:
Get observation from environment
Use policy network to map observation to action distribution
Randomly sample one action from action distribution
Compute logarithmic probability of that action occurring
Step environment using action and store reward
Calculate loss over entire trajectory as function of probabilities and rewards
Recall loss functions are differentiable with respect to each parameter - thus, calculate how changes in parameters correlate with changes in the loss
Based on the loss, use a gradient descent policy to update weights
</syntaxhighlight>
=== Loss Function ===
d38fd3ff4a7d1b56ed03f204961a783dbae32b22
1260
1259
2024-05-25T00:16:07Z
Allen12
15
/* Overview */
wikitext
text/x-wiki
Allen's REINFORCE notes
=== Links ===
* [http://www.incompleteideas.net/book/RLbook2020.pdf RLbook2020]
[[Category:Reinforcement Learning]]
=== Motivation ===
Recall that the objective of Reinforcement Learning is to find an optimal policy <math> \pi^* </math> which we encode in a neural network with parameters <math>\theta^*</math>. <math> \pi_\theta </math> is a mapping from observations to actions. These optimal parameters are defined as
<math>\theta^* = \text{argmax}_\theta E_{\tau \sim p_\theta(\tau)} \left[ \sum_t r(s_t, a_t) \right] </math>. Let's unpack what this means. To phrase it in english, this is basically saying that the optimal policy is one such that the expected value of the total reward over following a trajectory (<math> \tau </math>) determined by the policy is the highest over all policies.
=== Overview ===
<syntaxhighlight lang="bash" >
Initialize neural network with input dimensions = observation dimensions and output dimensions = action dimensions
For each episode:
While not terminated:
Get observation from environment
Use policy network to map observation to action distribution
Randomly sample one action from action distribution
Compute logarithmic probability of that action occurring
Step environment using action and store reward
Calculate loss over entire trajectory as function of probabilities and rewards
Recall loss functions are differentiable with respect to each parameter - thus, calculate how changes in parameters correlate with changes in the loss
Based on the loss, use a gradient descent policy to update weights
</syntaxhighlight>
=== Loss Function ===
a40964e9395dff252905217752aa69ef62e1b255
1261
1260
2024-05-25T00:59:49Z
Allen12
15
/* Loss Function */
wikitext
text/x-wiki
Allen's REINFORCE notes
=== Links ===
* [http://www.incompleteideas.net/book/RLbook2020.pdf RLbook2020]
[[Category:Reinforcement Learning]]
=== Motivation ===
Recall that the objective of Reinforcement Learning is to find an optimal policy <math> \pi^* </math> which we encode in a neural network with parameters <math>\theta^*</math>. <math> \pi_\theta </math> is a mapping from observations to actions. These optimal parameters are defined as
<math>\theta^* = \text{argmax}_\theta E_{\tau \sim p_\theta(\tau)} \left[ \sum_t r(s_t, a_t) \right] </math>. Let's unpack what this means. To phrase it in english, this is basically saying that the optimal policy is one such that the expected value of the total reward over following a trajectory (<math> \tau </math>) determined by the policy is the highest over all policies.
=== Overview ===
<syntaxhighlight lang="bash" >
Initialize neural network with input dimensions = observation dimensions and output dimensions = action dimensions
For each episode:
While not terminated:
Get observation from environment
Use policy network to map observation to action distribution
Randomly sample one action from action distribution
Compute logarithmic probability of that action occurring
Step environment using action and store reward
Calculate loss over entire trajectory as function of probabilities and rewards
Recall loss functions are differentiable with respect to each parameter - thus, calculate how changes in parameters correlate with changes in the loss
Based on the loss, use a gradient descent policy to update weights
</syntaxhighlight>
=== Loss Function ===
The goal of REINFORCE is to optimize the expected cumulative reward.
d242e14d5f6352801c6059a5ad09af3b2de02b6c
1263
1261
2024-05-25T23:08:28Z
Allen12
15
wikitext
text/x-wiki
Allen's REINFORCE notes
=== Links ===
* [http://www.incompleteideas.net/book/RLbook2020.pdf RLbook2020]
[[Category:Reinforcement Learning]]
=== Motivation ===
Recall that the objective of Reinforcement Learning is to find an optimal policy <math> \pi^* </math> which we encode in a neural network with parameters <math>\theta^*</math>. <math> \pi_\theta </math> is a mapping from observations to actions. These optimal parameters are defined as
<math>\theta^* = \text{argmax}_\theta E_{\tau \sim p_\theta(\tau)} \left[ \sum_t r(s_t, a_t) \right] </math>. Let's unpack what this means. To phrase it in english, this is basically saying that the optimal policy is one such that the expected value of the total reward over following a trajectory (<math> \tau </math>) determined by the policy is the highest over all policies.
=== Overview ===
<syntaxhighlight lang="bash" >
Initialize neural network with input dimensions = observation dimensions and output dimensions = action dimensions
For each episode:
While not terminated:
Get observation from environment
Use policy network to map observation to action distribution
Randomly sample one action from action distribution
Compute logarithmic probability of that action occurring
Step environment using action and store reward
Calculate loss over entire trajectory as function of probabilities and rewards
Recall loss functions are differentiable with respect to each parameter - thus, calculate how changes in parameters correlate with changes in the loss
Based on the loss, use a gradient descent policy to update weights
</syntaxhighlight>
=== Objective Function ===
The goal of reinforcement learning is to maximize the expected reward over the entire episode. We use <math>R(\tau)</math> to denote the total reward over some trajectory <math>\tau</math> defined by our policy. Thus we want to maximize <math>E_{\tau ~ \pi_\theta}[R(\tau)]</math>
=== Loss Function ===
The goal of REINFORCE is to optimize the expected cumulative reward. We do so using gradient descent
b5ac0d05edd220f335fa2b7eb844a9bcb50ad83a
1264
1263
2024-05-25T23:12:04Z
Allen12
15
wikitext
text/x-wiki
Allen's REINFORCE notes
=== Links ===
* [http://www.incompleteideas.net/book/RLbook2020.pdf RLbook2020]
[[Category:Reinforcement Learning]]
=== Motivation ===
Recall that the objective of Reinforcement Learning is to find an optimal policy <math> \pi^* </math> which we encode in a neural network with parameters <math>\theta^*</math>. <math> \pi_\theta </math> is a mapping from observations to actions. These optimal parameters are defined as
<math>\theta^* = \text{argmax}_\theta E_{\tau \sim p_\theta(\tau)} \left[ \sum_t r(s_t, a_t) \right] </math>. Let's unpack what this means. To phrase it in english, this is basically saying that the optimal policy is one such that the expected value of the total reward over following a trajectory (<math> \tau </math>) determined by the policy is the highest over all policies.
=== Overview ===
<syntaxhighlight lang="bash" >
Initialize neural network with input dimensions = observation dimensions and output dimensions = action dimensions
For each episode:
While not terminated:
Get observation from environment
Use policy network to map observation to action distribution
Randomly sample one action from action distribution
Compute logarithmic probability of that action occurring
Step environment using action and store reward
Calculate loss over entire trajectory as function of probabilities and rewards
Recall loss functions are differentiable with respect to each parameter - thus, calculate how changes in parameters correlate with changes in the loss
Based on the loss, use a gradient descent policy to update weights
</syntaxhighlight>
=== Objective Function ===
The goal of reinforcement learning is to maximize the expected reward over the entire episode. We use <math>R(\tau)</math> to denote the total reward over some trajectory <math>\tau</math> defined by our policy. Thus we want to maximize <math>E_{\tau \sim \pi_\theta}[R(\tau)]</math>. We can use the definition of expected value to expand this as <math>\sum_\tau P(\tau | \theta) R (\tau)</math>, where the probability of a given trajectory occurring can further be expressed as <math> P(\tau | \theta) = P(s_0) \prod^T_{t=0} \pi_\theta(a_t | s_t) P(s_{t + 1} | s_t, a_t) </math>.
=== Loss Function ===
The goal of REINFORCE is to optimize the expected cumulative reward. We do so using gradient descent
2eb997c1d926cd06a73be4efdac8e6eb636b8f4f
1265
1264
2024-05-25T23:12:37Z
Allen12
15
wikitext
text/x-wiki
Allen's REINFORCE notes
=== Links ===
* [http://www.incompleteideas.net/book/RLbook2020.pdf RLbook2020]
[[Category:Reinforcement Learning]]
=== Motivation ===
Recall that the objective of Reinforcement Learning is to find an optimal policy <math> \pi^* </math> which we encode in a neural network with parameters <math>\theta^*</math>. <math> \pi_\theta </math> is a mapping from observations to actions. These optimal parameters are defined as
<math>\theta^* = \text{argmax}_\theta E_{\tau \sim p_\theta(\tau)} \left[ \sum_t r(s_t, a_t) \right] </math>. Let's unpack what this means. To phrase it in english, this is basically saying that the optimal policy is one such that the expected value of the total reward over following a trajectory (<math> \tau </math>) determined by the policy is the highest over all policies.
=== Overview ===
<syntaxhighlight lang="bash" >
Initialize neural network with input dimensions = observation dimensions and output dimensions = action dimensions
For each episode:
While not terminated:
Get observation from environment
Use policy network to map observation to action distribution
Randomly sample one action from action distribution
Compute logarithmic probability of that action occurring
Step environment using action and store reward
Calculate loss over entire trajectory as function of probabilities and rewards
Recall loss functions are differentiable with respect to each parameter - thus, calculate how changes in parameters correlate with changes in the loss
Based on the loss, use a gradient descent policy to update weights
</syntaxhighlight>
=== Objective Function ===
The goal of reinforcement learning is to maximize the expected reward over the entire episode. We use <math>R(\tau)</math> to denote the total reward over some trajectory <math>\tau</math> defined by our policy. Thus we want to maximize <math>E_{\tau \sim \pi_\theta}[R(\tau)]</math>. We can use the definition of expected value to expand this as <math>\sum_\tau P(\tau | \theta) R (\tau)</math>, where the probability of a given trajectory occurring can further be expressed as <math> P(\tau | \theta) = P(s_0) \prod^T_{t=0} \pi_\theta(a_t | s_t) P(s_{t + 1} | s_t, a_t) </math>
=== Loss Function ===
The goal of REINFORCE is to optimize the expected cumulative reward. We do so using gradient descent
bea94d920a8d64123f1256d47ddf030d3de807a5
1266
1265
2024-05-25T23:29:57Z
Allen12
15
wikitext
text/x-wiki
Allen's REINFORCE notes
=== Links ===
* [http://www.incompleteideas.net/book/RLbook2020.pdf RLbook2020]
[[Category:Reinforcement Learning]]
=== Motivation ===
Recall that the objective of Reinforcement Learning is to find an optimal policy <math> \pi^* </math> which we encode in a neural network with parameters <math>\theta^*</math>. <math> \pi_\theta </math> is a mapping from observations to actions. These optimal parameters are defined as
<math>\theta^* = \text{argmax}_\theta E_{\tau \sim p_\theta(\tau)} \left[ \sum_t r(s_t, a_t) \right] </math>. Let's unpack what this means. To phrase it in english, this is basically saying that the optimal policy is one such that the expected value of the total reward over following a trajectory (<math> \tau </math>) determined by the policy is the highest over all policies.
=== Overview ===
<syntaxhighlight lang="bash" >
Initialize neural network with input dimensions = observation dimensions and output dimensions = action dimensions
For each episode:
While not terminated:
Get observation from environment
Use policy network to map observation to action distribution
Randomly sample one action from action distribution
Compute logarithmic probability of that action occurring
Step environment using action and store reward
Calculate loss over entire trajectory as function of probabilities and rewards
Recall loss functions are differentiable with respect to each parameter - thus, calculate how changes in parameters correlate with changes in the loss
Based on the loss, use a gradient descent policy to update weights
</syntaxhighlight>
=== Objective Function ===
The goal of reinforcement learning is to maximize the expected reward over the entire episode. We use <math>R(\tau)</math> to denote the total reward over some trajectory <math>\tau</math> defined by our policy. Thus we want to maximize <math>E_{\tau \sim \pi_\theta}[R(\tau)]</math>. We can use the definition of expected value to expand this as <math>\sum_\tau P(\tau | \theta) R (\tau)</math>, where the probability of a given trajectory occurring can further be expressed as <math> P(\tau | \theta) = P(s_0) \prod^T_{t=0} \pi_\theta(a_t | s_t) P(s_{t + 1} | s_t, a_t) </math>.
Now we want to find the gradient of <math> J (\theta) </math>.
=== Loss Function ===
The goal of REINFORCE is to optimize the expected cumulative reward. We do so using gradient descent
90bbbd28264cae3c17a37242d6a4a931e3135c0a
1267
1266
2024-05-25T23:31:13Z
Allen12
15
wikitext
text/x-wiki
Allen's REINFORCE notes
=== Links ===
* [http://www.incompleteideas.net/book/RLbook2020.pdf RLbook2020]
* [https://samuelebolotta.medium.com/2-deep-reinforcement-learning-policy-gradients-5a416a99700a Deep RL: Policy Gradients]
[[Category:Reinforcement Learning]]
=== Motivation ===
Recall that the objective of Reinforcement Learning is to find an optimal policy <math> \pi^* </math> which we encode in a neural network with parameters <math>\theta^*</math>. <math> \pi_\theta </math> is a mapping from observations to actions. These optimal parameters are defined as
<math>\theta^* = \text{argmax}_\theta E_{\tau \sim p_\theta(\tau)} \left[ \sum_t r(s_t, a_t) \right] </math>. Let's unpack what this means. To phrase it in english, this is basically saying that the optimal policy is one such that the expected value of the total reward over following a trajectory (<math> \tau </math>) determined by the policy is the highest over all policies.
=== Overview ===
<syntaxhighlight lang="bash" >
Initialize neural network with input dimensions = observation dimensions and output dimensions = action dimensions
For each episode:
While not terminated:
Get observation from environment
Use policy network to map observation to action distribution
Randomly sample one action from action distribution
Compute logarithmic probability of that action occurring
Step environment using action and store reward
Calculate loss over entire trajectory as function of probabilities and rewards
Recall loss functions are differentiable with respect to each parameter - thus, calculate how changes in parameters correlate with changes in the loss
Based on the loss, use a gradient descent policy to update weights
</syntaxhighlight>
=== Objective Function ===
The goal of reinforcement learning is to maximize the expected reward over the entire episode. We use <math>R(\tau)</math> to denote the total reward over some trajectory <math>\tau</math> defined by our policy. Thus we want to maximize <math>E_{\tau \sim \pi_\theta}[R(\tau)]</math>. We can use the definition of expected value to expand this as <math>\sum_\tau P(\tau | \theta) R (\tau)</math>, where the probability of a given trajectory occurring can further be expressed as <math> P(\tau | \theta) = P(s_0) \prod^T_{t=0} \pi_\theta(a_t | s_t) P(s_{t + 1} | s_t, a_t) </math>.
Now we want to find the gradient of <math> J (\theta) </math>.
=== Loss Function ===
The goal of REINFORCE is to optimize the expected cumulative reward. We do so using gradient descent
18dd62b69ef6c8b85563989e3006e5f9b35fd255
1268
1267
2024-05-25T23:35:27Z
Allen12
15
/* Objective Function */
wikitext
text/x-wiki
Allen's REINFORCE notes
=== Links ===
* [http://www.incompleteideas.net/book/RLbook2020.pdf RLbook2020]
* [https://samuelebolotta.medium.com/2-deep-reinforcement-learning-policy-gradients-5a416a99700a Deep RL: Policy Gradients]
[[Category:Reinforcement Learning]]
=== Motivation ===
Recall that the objective of Reinforcement Learning is to find an optimal policy <math> \pi^* </math> which we encode in a neural network with parameters <math>\theta^*</math>. <math> \pi_\theta </math> is a mapping from observations to actions. These optimal parameters are defined as
<math>\theta^* = \text{argmax}_\theta E_{\tau \sim p_\theta(\tau)} \left[ \sum_t r(s_t, a_t) \right] </math>. Let's unpack what this means. To phrase it in english, this is basically saying that the optimal policy is one such that the expected value of the total reward over following a trajectory (<math> \tau </math>) determined by the policy is the highest over all policies.
=== Overview ===
<syntaxhighlight lang="bash" >
Initialize neural network with input dimensions = observation dimensions and output dimensions = action dimensions
For each episode:
While not terminated:
Get observation from environment
Use policy network to map observation to action distribution
Randomly sample one action from action distribution
Compute logarithmic probability of that action occurring
Step environment using action and store reward
Calculate loss over entire trajectory as function of probabilities and rewards
Recall loss functions are differentiable with respect to each parameter - thus, calculate how changes in parameters correlate with changes in the loss
Based on the loss, use a gradient descent policy to update weights
</syntaxhighlight>
=== Objective Function ===
The goal of reinforcement learning is to maximize the expected reward over the entire episode. We use <math>R(\tau)</math> to denote the total reward over some trajectory <math>\tau</math> defined by our policy. Thus we want to maximize <math>E_{\tau \sim \pi_\theta}[R(\tau)]</math>. We can use the definition of expected value to expand this as <math>\sum_\tau P(\tau | \theta) R (\tau)</math>, where the probability of a given trajectory occurring can further be expressed as <math> P(\tau | \theta) = P(s_0) \prod^T_{t=0} \pi_\theta(a_t | s_t) P(s_{t + 1} | s_t, a_t) </math>.
Now we want to find the gradient of <math> J (\theta) </math>, namely
<math>\nabla_\theta \sum_\tau P(\tau | \theta) R(\tau) </math>. The important step here is called the Log Derivative Trick.
====Log Derivative Trick====
Suppose we'd like to find <math>\nabla_{x_1}\log(f(x_1, x_2, x_3, ...))</math>. By the chain rule this is equal to <math>\frac{\nabla_{x_1}f(x_1, x_2, x_3, ...)}{f(x_1, x_2, x_3, ...)}</math>.
=== Loss Function ===
The goal of REINFORCE is to optimize the expected cumulative reward. We do so using gradient descent
7867cdc7fbcc918644f1be8b23f8540593b38545
1269
1268
2024-05-26T00:08:48Z
Allen12
15
wikitext
text/x-wiki
Allen's REINFORCE notes
=== Links ===
* [http://www.incompleteideas.net/book/RLbook2020.pdf RLbook2020]
* [https://samuelebolotta.medium.com/2-deep-reinforcement-learning-policy-gradients-5a416a99700a Deep RL: Policy Gradients]
[[Category:Reinforcement Learning]]
=== Motivation ===
Recall that the objective of Reinforcement Learning is to find an optimal policy <math> \pi^* </math> which we encode in a neural network with parameters <math>\theta^*</math>. <math> \pi_\theta </math> is a mapping from observations to actions. These optimal parameters are defined as
<math>\theta^* = \text{argmax}_\theta E_{\tau \sim p_\theta(\tau)} \left[ \sum_t r(s_t, a_t) \right] </math>. Let's unpack what this means. To phrase it in english, this is basically saying that the optimal policy is one such that the expected value of the total reward over following a trajectory (<math> \tau </math>) determined by the policy is the highest over all policies.
=== Overview ===
<syntaxhighlight lang="bash" >
Initialize neural network with input dimensions = observation dimensions and output dimensions = action dimensions
For each episode:
While not terminated:
Get observation from environment
Use policy network to map observation to action distribution
Randomly sample one action from action distribution
Compute logarithmic probability of that action occurring
Step environment using action and store reward
Calculate loss over entire trajectory as function of probabilities and rewards
Recall loss functions are differentiable with respect to each parameter - thus, calculate how changes in parameters correlate with changes in the loss
Based on the loss, use a gradient descent policy to update weights
</syntaxhighlight>
=== Objective Function ===
The goal of reinforcement learning is to maximize the expected reward over the entire episode. We use <math>R(\tau)</math> to denote the total reward over some trajectory <math>\tau</math> defined by our policy. Thus we want to maximize <math>E_{\tau \sim \pi_\theta}[R(\tau)]</math>. We can use the definition of expected value to expand this as <math>\sum_\tau P(\tau | \theta) R (\tau)</math>, where the probability of a given trajectory occurring can further be expressed as <math> P(\tau | \theta) = P(s_0) \prod^T_{t=0} \pi_\theta(a_t | s_t) P(s_{t + 1} | s_t, a_t) </math>.
Now we want to find the gradient of <math> J (\theta) </math>, namely
<math>\nabla_\theta \sum_\tau P(\tau | \theta) R(\tau) </math>. The important step here is called the Log Derivative Trick.
====Log Derivative Trick====
Suppose we'd like to find <math>\nabla_{x_1}\log(f(x_1, x_2, x_3, ...))</math>. By the chain rule this is equal to <math>\frac{\nabla_{x_1}f(x_1, x_2, x_3, ...)}{f(x_1, x_2, x_3, ...)}</math>. Thus, by rearranging, we can take the gradient of any function with respect to some variable as <math>\nabla_{x_1}f(x_1, x_2, x_3, ...)= f(x_1, x_2, x_3,...)\nabla_{x_1}\log(f(x_1, x_2, x_3, ...))</math>.
=== Loss Function ===
The goal of REINFORCE is to optimize the expected cumulative reward. We do so using gradient descent
adf057324ba0ba49a5f3d5806c8e93b6c418d73a
1270
1269
2024-05-26T00:09:06Z
Allen12
15
wikitext
text/x-wiki
Allen's REINFORCE notes
=== Links ===
* [http://www.incompleteideas.net/book/RLbook2020.pdf RLbook2020]
* [https://samuelebolotta.medium.com/2-deep-reinforcement-learning-policy-gradients-5a416a99700a Deep RL: Policy Gradients]
[[Category:Reinforcement Learning]]
=== Motivation ===
Recall that the objective of Reinforcement Learning is to find an optimal policy <math> \pi^* </math> which we encode in a neural network with parameters <math>\theta^*</math>. <math> \pi_\theta </math> is a mapping from observations to actions. These optimal parameters are defined as
<math>\theta^* = \text{argmax}_\theta E_{\tau \sim p_\theta(\tau)} \left[ \sum_t r(s_t, a_t) \right] </math>. Let's unpack what this means. To phrase it in english, this is basically saying that the optimal policy is one such that the expected value of the total reward over following a trajectory (<math> \tau </math>) determined by the policy is the highest over all policies.
=== Overview ===
<syntaxhighlight lang="bash" >
Initialize neural network with input dimensions = observation dimensions and output dimensions = action dimensions
For each episode:
While not terminated:
Get observation from environment
Use policy network to map observation to action distribution
Randomly sample one action from action distribution
Compute logarithmic probability of that action occurring
Step environment using action and store reward
Calculate loss over entire trajectory as function of probabilities and rewards
Recall loss functions are differentiable with respect to each parameter - thus, calculate how changes in parameters correlate with changes in the loss
Based on the loss, use a gradient descent policy to update weights
</syntaxhighlight>
=== Objective Function ===
The goal of reinforcement learning is to maximize the expected reward over the entire episode. We use <math>R(\tau)</math> to denote the total reward over some trajectory <math>\tau</math> defined by our policy. Thus we want to maximize <math>E_{\tau \sim \pi_\theta}[R(\tau)]</math>. We can use the definition of expected value to expand this as <math>\sum_\tau P(\tau | \theta) R (\tau)</math>, where the probability of a given trajectory occurring can further be expressed as <math> P(\tau | \theta) = P(s_0) \prod^T_{t=0} \pi_\theta(a_t | s_t) P(s_{t + 1} | s_t, a_t) </math>.
Now we want to find the gradient of <math> J (\theta) </math>, namely
<math>\nabla_\theta \sum_\tau P(\tau | \theta) R(\tau) </math>. The important step here is called the Log Derivative Trick.
====Log Derivative Trick====
Suppose we'd like to find <math>\nabla_{x_1}\log(f(x_1, x_2, x_3, ...))</math>. By the chain rule this is equal to <math>\frac{\nabla_{x_1}f(x_1, x_2, x_3 ...)}{f(x_1, x_2, x_3 ...)}</math>. Thus, by rearranging, we can take the gradient of any function with respect to some variable as <math>\nabla_{x_1}f(x_1, x_2, x_3, ...)= f(x_1, x_2, x_3,...)\nabla_{x_1}\log(f(x_1, x_2, x_3, ...)</math>.
=== Loss Function ===
The goal of REINFORCE is to optimize the expected cumulative reward. We do so using gradient descent
f1735feff7c9680f5f7fe19b0be6c986676de3cf
1271
1270
2024-05-26T00:30:07Z
Allen12
15
wikitext
text/x-wiki
Allen's REINFORCE notes
=== Links ===
* [http://www.incompleteideas.net/book/RLbook2020.pdf RLbook2020]
* [https://samuelebolotta.medium.com/2-deep-reinforcement-learning-policy-gradients-5a416a99700a Deep RL: Policy Gradients]
[[Category:Reinforcement Learning]]
=== Motivation ===
Recall that the objective of Reinforcement Learning is to find an optimal policy <math> \pi^* </math> which we encode in a neural network with parameters <math>\theta^*</math>. <math> \pi_\theta </math> is a mapping from observations to actions. These optimal parameters are defined as
<math>\theta^* = \text{argmax}_\theta E_{\tau \sim p_\theta(\tau)} \left[ \sum_t r(s_t, a_t) \right] </math>. Let's unpack what this means. To phrase it in english, this is basically saying that the optimal policy is one such that the expected value of the total reward over following a trajectory (<math> \tau </math>) determined by the policy is the highest over all policies.
=== Overview ===
<syntaxhighlight lang="bash" >
Initialize neural network with input dimensions = observation dimensions and output dimensions = action dimensions
For each episode:
While not terminated:
Get observation from environment
Use policy network to map observation to action distribution
Randomly sample one action from action distribution
Compute logarithmic probability of that action occurring
Step environment using action and store reward
Calculate loss over entire trajectory as function of probabilities and rewards
Recall loss functions are differentiable with respect to each parameter - thus, calculate how changes in parameters correlate with changes in the loss
Based on the loss, use a gradient descent policy to update weights
</syntaxhighlight>
=== Objective Function ===
The goal of reinforcement learning is to maximize the expected reward over the entire episode. We use <math>R(\tau)</math> to denote the total reward over some trajectory <math>\tau</math> defined by our policy. Thus we want to maximize <math>E_{\tau \sim \pi_\theta}[R(\tau)]</math>. We can use the definition of expected value to expand this as <math>\sum_\tau P(\tau | \theta) R (\tau)</math>, where the probability of a given trajectory occurring can further be expressed as <math> P(\tau | \theta) = P(s_0) \prod^T_{t=0} \pi_\theta(a_t | s_t) P(s_{t + 1} | s_t, a_t) </math>.
Now we want to find the gradient of <math> J (\theta) </math>, namely
<math>\nabla_\theta \sum_\tau P(\tau | \theta) R(\tau) </math>. Since the reward function isn't dependent on the parameters, we can rearrange: <math> \sum_\tau R(\tau) \nabla_\theta P(\tau | \theta) </math>. The next step here is what's called the Log Derivative Trick.
====Log Derivative Trick====
Suppose we'd like to find <math>\nabla_{x_1}\log(f(x_1, x_2, x_3, ...))</math>. By the chain rule this is equal to <math>\frac{\nabla_{x_1}f(x_1, x_2, x_3 ...)}{f(x_1, x_2, x_3 ...)}</math>. Thus, by rearranging, we can take the gradient of any function with respect to some variable as <math>\nabla_{x_1}f(x_1, x_2, x_3, ...)= f(x_1, x_2, x_3,...)\nabla_{x_1}\log(f(x_1, x_2, x_3, ...)</math>.
Thus, using this idea, we can rewrite our gradient as <math> \sum_\tau R(\tau) P(\tau | \theta) \nabla_\theta \log P(\tau | \theta) </math>.
=== Loss Function ===
The goal of REINFORCE is to optimize the expected cumulative reward. We do so using gradient descent
5d2b6e16b84caba2ca446308ac8df6f38f3e38f6
1272
1271
2024-05-26T00:32:41Z
Allen12
15
wikitext
text/x-wiki
Allen's REINFORCE notes
=== Links ===
* [http://www.incompleteideas.net/book/RLbook2020.pdf RLbook2020]
* [https://samuelebolotta.medium.com/2-deep-reinforcement-learning-policy-gradients-5a416a99700a Deep RL: Policy Gradients]
[[Category:Reinforcement Learning]]
=== Motivation ===
Recall that the objective of Reinforcement Learning is to find an optimal policy <math> \pi^* </math> which we encode in a neural network with parameters <math>\theta^*</math>. <math> \pi_\theta </math> is a mapping from observations to actions. These optimal parameters are defined as
<math>\theta^* = \text{argmax}_\theta E_{\tau \sim p_\theta(\tau)} \left[ \sum_t r(s_t, a_t) \right] </math>. Let's unpack what this means. To phrase it in english, this is basically saying that the optimal policy is one such that the expected value of the total reward over following a trajectory (<math> \tau </math>) determined by the policy is the highest over all policies.
=== Overview ===
<syntaxhighlight lang="bash" >
Initialize neural network with input dimensions = observation dimensions and output dimensions = action dimensions
For each episode:
While not terminated:
Get observation from environment
Use policy network to map observation to action distribution
Randomly sample one action from action distribution
Compute logarithmic probability of that action occurring
Step environment using action and store reward
Calculate loss over entire trajectory as function of probabilities and rewards
Recall loss functions are differentiable with respect to each parameter - thus, calculate how changes in parameters correlate with changes in the loss
Based on the loss, use a gradient descent policy to update weights
</syntaxhighlight>
=== Objective Function ===
The goal of reinforcement learning is to maximize the expected reward over the entire episode. We use <math>R(\tau)</math> to denote the total reward over some trajectory <math>\tau</math> defined by our policy. Thus we want to maximize <math>E_{\tau \sim \pi_\theta}[R(\tau)]</math>. We can use the definition of expected value to expand this as <math>\sum_\tau P(\tau | \theta) R (\tau)</math>, where the probability of a given trajectory occurring can further be expressed as <math> P(\tau | \theta) = P(s_0) \prod^T_{t=0} \pi_\theta(a_t | s_t) P(s_{t + 1} | s_t, a_t) </math>.
Now we want to find the gradient of <math> J (\theta) </math>, namely
<math>\nabla_\theta \sum_\tau P(\tau | \theta) R(\tau) </math>. Since the reward function isn't dependent on the parameters, we can rearrange: <math> \sum_\tau R(\tau) \nabla_\theta P(\tau | \theta) </math>. The next step here is what's called the Log Derivative Trick.
Suppose we'd like to find <math>\nabla_{x_1}\log(f(x_1, x_2, x_3, ...))</math>. By the chain rule this is equal to <math>\frac{\nabla_{x_1}f(x_1, x_2, x_3 ...)}{f(x_1, x_2, x_3 ...)}</math>. Thus, by rearranging, we can take the gradient of any function with respect to some variable as <math>\nabla_{x_1}f(x_1, x_2, x_3, ...)= f(x_1, x_2, x_3,...)\nabla_{x_1}\log(f(x_1, x_2, x_3, ...)</math>.
Thus, using this idea, we can rewrite our gradient as <math> \sum_\tau R(\tau) p(\tau | \theta) \nabla_\theta \log P(\tau | \theta) </math>
=== Loss Function ===
The goal of REINFORCE is to optimize the expected cumulative reward. We do so using gradient descent
f261c3bcf415668b559740287666988d1d664d2e
1273
1272
2024-05-26T00:35:11Z
Allen12
15
wikitext
text/x-wiki
Allen's REINFORCE notes
=== Links ===
* [http://www.incompleteideas.net/book/RLbook2020.pdf RLbook2020]
* [https://samuelebolotta.medium.com/2-deep-reinforcement-learning-policy-gradients-5a416a99700a Deep RL: Policy Gradients]
[[Category:Reinforcement Learning]]
=== Motivation ===
Recall that the objective of Reinforcement Learning is to find an optimal policy <math> \pi^* </math> which we encode in a neural network with parameters <math>\theta^*</math>. <math> \pi_\theta </math> is a mapping from observations to actions. These optimal parameters are defined as
<math>\theta^* = \text{argmax}_\theta E_{\tau \sim p_\theta(\tau)} \left[ \sum_t r(s_t, a_t) \right] </math>. Let's unpack what this means. To phrase it in english, this is basically saying that the optimal policy is one such that the expected value of the total reward over following a trajectory (<math> \tau </math>) determined by the policy is the highest over all policies.
=== Overview ===
<syntaxhighlight lang="bash" >
Initialize neural network with input dimensions = observation dimensions and output dimensions = action dimensions
For each episode:
While not terminated:
Get observation from environment
Use policy network to map observation to action distribution
Randomly sample one action from action distribution
Compute logarithmic probability of that action occurring
Step environment using action and store reward
Calculate loss over entire trajectory as function of probabilities and rewards
Recall loss functions are differentiable with respect to each parameter - thus, calculate how changes in parameters correlate with changes in the loss
Based on the loss, use a gradient descent policy to update weights
</syntaxhighlight>
=== Objective Function ===
The goal of reinforcement learning is to maximize the expected reward over the entire episode. We use <math>R(\tau)</math> to denote the total reward over some trajectory <math>\tau</math> defined by our policy. Thus we want to maximize <math>E_{\tau \sim \pi_\theta}[R(\tau)]</math>. We can use the definition of expected value to expand this as <math>\sum_\tau P(\tau | \theta) R (\tau)</math>, where the probability of a given trajectory occurring can further be expressed as <math> P(\tau | \theta) = P(s_0) \prod^T_{t=0} \pi_\theta(a_t | s_t) P(s_{t + 1} | s_t, a_t) </math>.
Now we want to find the gradient of <math> J (\theta) </math>, namely
<math>\nabla_\theta \sum_\tau P(\tau | \theta) R(\tau) </math>. Since the reward function isn't dependent on the parameters, we can rearrange: <math> \sum_\tau R(\tau) \nabla_\theta P(\tau | \theta) </math>. The next step here is what's called the Log Derivative Trick.
Suppose we'd like to find <math>\nabla_{x_1}\log(f(x_1, x_2, x_3, ...))</math>. By the chain rule this is equal to <math>\frac{\nabla_{x_1}f(x_1, x_2, x_3 ...)}{f(x_1, x_2, x_3 ...)}</math>. Thus, by rearranging, we can take the gradient of any function with respect to some variable as <math>\nabla_{x_1}f(x_1, x_2, x_3, ...)= f(x_1, x_2, x_3,...)\nabla_{x_1}\log(f(x_1, x_2, x_3, ...)</math>.
Thus, using this idea, we can rewrite our gradient as <math> \sum_\tau R(\tau) P(\tau | \theta) \nabla_\theta \log P(\tau | \theta) </math>. Finally, using the definition of expectation again, we have <math> \nabla_\theta J(\theta) = E_{\tau \sim \pi_\theta} \left[ R(\tau) \sum_{t=0}^T \nabla_\theta \log \pi_\theta (a_t | s_t) \right] </math>
=== Loss Function ===
The goal of REINFORCE is to optimize the expected cumulative reward. We do so using gradient descent
7c7c9d881c1913efab52616f0e0aea083706d201
1274
1273
2024-05-26T00:35:24Z
Allen12
15
wikitext
text/x-wiki
Allen's REINFORCE notes
=== Links ===
* [http://www.incompleteideas.net/book/RLbook2020.pdf RLbook2020]
* [https://samuelebolotta.medium.com/2-deep-reinforcement-learning-policy-gradients-5a416a99700a Deep RL: Policy Gradients]
[[Category:Reinforcement Learning]]
=== Motivation ===
Recall that the objective of Reinforcement Learning is to find an optimal policy <math> \pi^* </math> which we encode in a neural network with parameters <math>\theta^*</math>. <math> \pi_\theta </math> is a mapping from observations to actions. These optimal parameters are defined as
<math>\theta^* = \text{argmax}_\theta E_{\tau \sim p_\theta(\tau)} \left[ \sum_t r(s_t, a_t) \right] </math>. Let's unpack what this means. To phrase it in english, this is basically saying that the optimal policy is one such that the expected value of the total reward over following a trajectory (<math> \tau </math>) determined by the policy is the highest over all policies.
=== Overview ===
<syntaxhighlight lang="bash" >
Initialize neural network with input dimensions = observation dimensions and output dimensions = action dimensions
For each episode:
While not terminated:
Get observation from environment
Use policy network to map observation to action distribution
Randomly sample one action from action distribution
Compute logarithmic probability of that action occurring
Step environment using action and store reward
Calculate loss over entire trajectory as function of probabilities and rewards
Recall loss functions are differentiable with respect to each parameter - thus, calculate how changes in parameters correlate with changes in the loss
Based on the loss, use a gradient descent policy to update weights
</syntaxhighlight>
=== Objective Function ===
The goal of reinforcement learning is to maximize the expected reward over the entire episode. We use <math>R(\tau)</math> to denote the total reward over some trajectory <math>\tau</math> defined by our policy. Thus we want to maximize <math>E_{\tau \sim \pi_\theta}[R(\tau)]</math>. We can use the definition of expected value to expand this as <math>\sum_\tau P(\tau | \theta) R (\tau)</math>, where the probability of a given trajectory occurring can further be expressed as <math> P(\tau | \theta) = P(s_0) \prod^T_{t=0} \pi_\theta(a_t | s_t) P(s_{t + 1} | s_t, a_t) </math>.
Now we want to find the gradient of <math> J (\theta) </math>, namely
<math>\nabla_\theta \sum_\tau P(\tau | \theta) R(\tau) </math>. Since the reward function isn't dependent on the parameters, we can rearrange: <math> \sum_\tau R(\tau) \nabla_\theta P(\tau | \theta) </math>. The next step here is what's called the Log Derivative Trick.
Suppose we'd like to find <math>\nabla_{x_1}\log(f(x_1, x_2, x_3, ...))</math>. By the chain rule this is equal to <math>\frac{\nabla_{x_1}f(x_1, x_2, x_3 ...)}{f(x_1, x_2, x_3 ...)}</math>. Thus, by rearranging, we can take the gradient of any function with respect to some variable as <math>\nabla_{x_1}f(x_1, x_2, x_3, ...)= f(x_1, x_2, x_3,...)\nabla_{x_1}\log(f(x_1, x_2, x_3, ...)</math>.
Thus, using this idea, we can rewrite our gradient as <math> \sum_\tau R(\tau) P(\tau | \theta) \nabla_\theta \log P(\tau | \theta) </math>. Finally, using the definition of expectation again, we have <math> \nabla_\theta J(\theta) = E_{\tau \sim \pi_\theta} \left[ R(\tau) \sum_{t=0}^T \nabla_\theta \log \pi_\theta (a_t | s_t) \right] </math>
=== Loss Function ===
The goal of REINFORCE is to optimize the expected cumulative reward. We do so using gradient descent
f28b03b5b6be5159a2a62c4e9655bf6a54854d10
1275
1274
2024-05-26T00:46:17Z
Allen12
15
wikitext
text/x-wiki
Allen's REINFORCE notes
=== Links ===
* [http://www.incompleteideas.net/book/RLbook2020.pdf RLbook2020]
* [https://samuelebolotta.medium.com/2-deep-reinforcement-learning-policy-gradients-5a416a99700a Deep RL: Policy Gradients]
[[Category:Reinforcement Learning]]
=== Motivation ===
Recall that the objective of Reinforcement Learning is to find an optimal policy <math> \pi^* </math> which we encode in a neural network with parameters <math>\theta^*</math>. <math> \pi_\theta </math> is a mapping from observations to actions. These optimal parameters are defined as
<math>\theta^* = \text{argmax}_\theta E_{\tau \sim p_\theta(\tau)} \left[ \sum_t r(s_t, a_t) \right] </math>. Let's unpack what this means. To phrase it in english, this is basically saying that the optimal policy is one such that the expected value of the total reward over following a trajectory (<math> \tau </math>) determined by the policy is the highest over all policies.
=== Overview ===
<syntaxhighlight lang="bash" >
Initialize neural network with input dimensions = observation dimensions and output dimensions = action dimensions
For each episode:
While not terminated:
Get observation from environment
Use policy network to map observation to action distribution
Randomly sample one action from action distribution
Compute logarithmic probability of that action occurring
Step environment using action and store reward
Calculate loss over entire trajectory as function of probabilities and rewards
Recall loss functions are differentiable with respect to each parameter - thus, calculate how changes in parameters correlate with changes in the loss
Based on the loss, use a gradient descent policy to update weights
</syntaxhighlight>
=== Objective Function ===
The goal of reinforcement learning is to maximize the expected reward over the entire episode. We use <math>R(\tau)</math> to denote the total reward over some trajectory <math>\tau</math> defined by our policy. Thus we want to maximize <math>E_{\tau \sim \pi_\theta}[R(\tau)]</math>. We can use the definition of expected value to expand this as <math>\sum_\tau P(\tau | \theta) R (\tau)</math>, where the probability of a given trajectory occurring can further be expressed as <math> P(\tau | \theta) = P(s_0) \prod^T_{t=0} \pi_\theta(a_t | s_t) P(s_{t + 1} | s_t, a_t) </math>.
Now we want to find the gradient of <math> J (\theta) </math>, namely
<math>\nabla_\theta \sum_\tau P(\tau | \theta) R(\tau) </math>. Since the reward function isn't dependent on the parameters, we can rearrange: <math> \sum_\tau R(\tau) \nabla_\theta P(\tau | \theta) </math>. The next step here is what's called the Log Derivative Trick.
Suppose we'd like to find <math>\nabla_{x_1}\log(f(x_1, x_2, x_3, ...))</math>. By the chain rule this is equal to <math>\frac{\nabla_{x_1}f(x_1, x_2, x_3 ...)}{f(x_1, x_2, x_3 ...)}</math>. Thus, by rearranging, we can take the gradient of any function with respect to some variable as <math>\nabla_{x_1}f(x_1, x_2, x_3, ...)= f(x_1, x_2, x_3,...)\nabla_{x_1}\log(f(x_1, x_2, x_3, ...)</math>.
Thus, using this idea, we can rewrite our gradient as <math> \sum_\tau R(\tau) P(\tau | \theta) \nabla_\theta \log P(\tau | \theta) </math>. Finally, using the definition of expectation again, we have <math> \nabla_\theta J(\theta) = E_{\tau \sim \pi_\theta} \left[ R(\tau) \sum_{t=0}^T \nabla_\theta \log \pi_\theta (a_t | s_t) \right] </math>
=== Loss Computation ===
It is tricky for us to give our policy the notion of "total" reward and "total" probability. Thus, we desire to change these values parameterized by <math> \tau </math> to instead be parameterized by <math> t </math>. That is, instead of examining the behavior of the entire episode, we want to create a summation over timesteps. We know that <math> R(\tau) </math> is the total reward over all timesteps. Thus, we can rewrite the <math> R(\tau) </math> component at some timestep <math> t </math> as <math> \gamma^{T - t}r_t </math>, where <math> \gamma </math> is our discount factor. Further, we recall that the probability of the trajectory occurring given the policy is <math> P(\tau | \theta) = P(s_0) \prod^T_{t=0} \pi_\theta(a_t | s_t) P(s_{t + 1} | s_t, a_t) </math>. Since the probabilities <math> P(s_0) </math> and <math> P(s_{t+1} | s_t, a_t) </math> are determined by the environment and independent of the policy, their gradient is zero. Recognizing this, and further recognizing that multiplication of probabilities in log space is equal to the sum of the logarithms of each of the probabilities, we get our final gradient expression <math> \sum_\tau P(\tau | \theta) R(\tau) \sum_{t = 0}^T \nabla_\theta \log \pi_\theta (a_t | s_t) </math>.
d609021eee34d80461bd431f0ba4563d11d3b0fd
1276
1275
2024-05-26T00:46:36Z
Allen12
15
wikitext
text/x-wiki
Allen's REINFORCE notes
=== Links ===
* [http://www.incompleteideas.net/book/RLbook2020.pdf RLbook2020]
* [https://samuelebolotta.medium.com/2-deep-reinforcement-learning-policy-gradients-5a416a99700a Deep RL: Policy Gradients]
[[Category:Reinforcement Learning]]
=== Motivation ===
Recall that the objective of Reinforcement Learning is to find an optimal policy <math> \pi^* </math> which we encode in a neural network with parameters <math>\theta^*</math>. <math> \pi_\theta </math> is a mapping from observations to actions. These optimal parameters are defined as
<math>\theta^* = \text{argmax}_\theta E_{\tau \sim p_\theta(\tau)} \left[ \sum_t r(s_t, a_t) \right] </math>. Let's unpack what this means. To phrase it in english, this is basically saying that the optimal policy is one such that the expected value of the total reward over following a trajectory (<math> \tau </math>) determined by the policy is the highest over all policies.
=== Overview ===
<syntaxhighlight lang="bash" >
Initialize neural network with input dimensions = observation dimensions and output dimensions = action dimensions
For each episode:
While not terminated:
Get observation from environment
Use policy network to map observation to action distribution
Randomly sample one action from action distribution
Compute logarithmic probability of that action occurring
Step environment using action and store reward
Calculate loss over entire trajectory as function of probabilities and rewards
Recall loss functions are differentiable with respect to each parameter - thus, calculate how changes in parameters correlate with changes in the loss
Based on the loss, use a gradient descent policy to update weights
</syntaxhighlight>
=== Objective Function ===
The goal of reinforcement learning is to maximize the expected reward over the entire episode. We use <math>R(\tau)</math> to denote the total reward over some trajectory <math>\tau</math> defined by our policy. Thus we want to maximize <math>E_{\tau \sim \pi_\theta}[R(\tau)]</math>. We can use the definition of expected value to expand this as <math>\sum_\tau P(\tau | \theta) R (\tau)</math>, where the probability of a given trajectory occurring can further be expressed as <math> P(\tau | \theta) = P(s_0) \prod^T_{t=0} \pi_\theta(a_t | s_t) P(s_{t + 1} | s_t, a_t) </math>.
Now we want to find the gradient of <math> J (\theta) </math>, namely
<math>\nabla_\theta \sum_\tau P(\tau | \theta) R(\tau) </math>. Since the reward function isn't dependent on the parameters, we can rearrange: <math> \sum_\tau R(\tau) \nabla_\theta P(\tau | \theta) </math>. The next step here is what's called the Log Derivative Trick.
Suppose we'd like to find <math>\nabla_{x_1}\log(f(x_1, x_2, x_3, ...))</math>. By the chain rule this is equal to <math>\frac{\nabla_{x_1}f(x_1, x_2, x_3 ...)}{f(x_1, x_2, x_3 ...)}</math>. Thus, by rearranging, we can take the gradient of any function with respect to some variable as <math>\nabla_{x_1}f(x_1, x_2, x_3, ...)= f(x_1, x_2, x_3,...)\nabla_{x_1}\log(f(x_1, x_2, x_3, ...)</math>.
Thus, using this idea, we can rewrite our gradient as <math> \sum_\tau R(\tau) P(\tau | \theta) \nabla_\theta \log P(\tau | \theta) </math>. Finally, using the definition of expectation again, we have <math> \nabla_\theta J(\theta) = E_{\tau \sim \pi_\theta} \left[ R(\tau) \sum_{t=0}^T \nabla_\theta \log \pi_\theta (a_t | s_t) \right] </math>
=== Loss Computation ===
It is tricky for us to give our policy the notion of "total" reward and "total" probability. Thus, we desire to change these values parameterized by <math> \tau </math> to instead be parameterized by <math> t </math>. That is, instead of examining the behavior of the entire episode, we want to create a summation over timesteps. We know that <math> R(\tau) </math> is the total reward over all timesteps. Thus, we can rewrite the <math> R(\tau) </math> component at some timestep <math> t </math> as <math> \gamma^{T - t}r_t </math>, where <math> \gamma </math> is our discount factor. Further, we recall that the probability of the trajectory occurring given the policy is <math> P(\tau | \theta) = P(s_0) \prod^T_{t=0} \pi_\theta(a_t | s_t) P(s_{t + 1} | s_t, a_t) </math>. Since the probabilities <math> P(s_0) </math> and <math> P(s_{t+1} | s_t, a_t) </math> are determined by the environment and independent of the policy, their gradient is zero. Recognizing this, and further recognizing that multiplication of probabilities in log space is equal to the sum of the logarithms of each of the probabilities, we get our final gradient expression <math> \sum_\tau P(\tau | \theta) R(\tau) \sum_{t = 0}^T \nabla_\theta \log \pi_\theta (a_t | s_t) </math>.
7683622b4988ed67dce1e1185dd66e73380e4758
1277
1276
2024-05-26T00:52:02Z
Allen12
15
wikitext
text/x-wiki
Allen's REINFORCE notes
=== Links ===
* [http://www.incompleteideas.net/book/RLbook2020.pdf RLbook2020]
* [https://samuelebolotta.medium.com/2-deep-reinforcement-learning-policy-gradients-5a416a99700a Deep RL: Policy Gradients]
[[Category:Reinforcement Learning]]
=== Motivation ===
Recall that the objective of Reinforcement Learning is to find an optimal policy <math> \pi^* </math> which we encode in a neural network with parameters <math>\theta^*</math>. <math> \pi_\theta </math> is a mapping from observations to actions. These optimal parameters are defined as
<math>\theta^* = \text{argmax}_\theta E_{\tau \sim p_\theta(\tau)} \left[ \sum_t r(s_t, a_t) \right] </math>. Let's unpack what this means. To phrase it in English, this says that the optimal policy is the one for which the expected total reward over trajectories (<math> \tau </math>) generated by following the policy is the highest over all policies.
=== Overview ===
<syntaxhighlight lang="bash" >
Initialize neural network with input dimensions = observation dimensions and output dimensions = action dimensions
For each episode:
While not terminated:
Get observation from environment
Use policy network to map observation to action distribution
Randomly sample one action from action distribution
Compute logarithmic probability of that action occurring
Step environment using action and store reward
Calculate loss over entire trajectory as function of probabilities and rewards
Recall loss functions are differentiable with respect to each parameter - thus, calculate how changes in parameters correlate with changes in the loss
Based on the loss, use a gradient descent policy to update weights
</syntaxhighlight>
=== Objective Function ===
The goal of reinforcement learning is to maximize the expected reward over the entire episode. We use <math>R(\tau)</math> to denote the total reward over some trajectory <math>\tau</math> defined by our policy. Thus we want to maximize <math>E_{\tau \sim \pi_\theta}[R(\tau)]</math>. We can use the definition of expected value to expand this as <math>\sum_\tau P(\tau | \theta) R (\tau)</math>, where the probability of a given trajectory occurring can further be expressed as <math> P(\tau | \theta) = P(s_0) \prod^T_{t=0} \pi_\theta(a_t | s_t) P(s_{t + 1} | s_t, a_t) </math>.
Now we want to find the gradient of <math> J (\theta) </math>, namely
<math>\nabla_\theta \sum_\tau P(\tau | \theta) R(\tau) </math>. Since the reward function isn't a dependent on the parameters. We can rearrange: <math> \sum_\tau R(\tau) \nabla_\theta P(\tau | \theta) </math>. The next step here is what's called the Log Derivative Trick.
Suppose we'd like to find <math>\nabla_{x_1}\log(f(x_1, x_2, x_3, ...))</math>. By the chain rule this is equal to <math>\frac{\nabla_{x_1}f(x_1, x_2, x_3 ...)}{f(x_1, x_2, x_3 ...)}</math>. Thus, by rearranging, we can take the gradient of any function with respect to some variable as <math>\nabla_{x_1}f(x_1, x_2, x_3, ...)= f(x_1, x_2, x_3,...)\nabla_{x_1}\log(f(x_1, x_2, x_3, ...)</math>.
Thus, using this idea, we can rewrite our gradient as <math> \sum_\tau R(\tau) p(\tau | \theta) \nabla_\theta \log P(\tau | \theta) </math>. Finally, using the definition of expectation again, we have <math> \nabla_\theta J(\theta) = E_{\tau \sim \pi_\theta} \left[\sum_{t=0}^T \nabla_\theta \log \pi_\theta (a_t | s_t) \right] </math>
=== Loss Computation ===
It is tricky for us to give our policy the notion of "total" reward and "total" probability. Thus, we desire to change these values parameterized by <math> \tau </math> to instead be parameterized by t. That is, instead of examining the behavior of the entire episode, we want to create a summation over timesteps. We know that <math> R(\tau) </math> is the total reward over all timesteps. Thus, we can rewrite the <math> R(\tau) </math> component at some timestep t as <math> \gamma^{T - t}r_t </math>, where gamma is our discount factor. Further, we recall that the probability of the trajectory occurring given the policy is <math> P(\tau | \theta) = P(s_0) \prod^T_{t=0} \pi_\theta(a_t | s_t) P(s_{t + 1} | s_t, a_t) </math>. Since the probabilities of <math> P(s_0) </math> and <math> P(s_{t+a} | s_t, a,t) </math> are determined by the environment and independent of the policy, their gradient is zero. Recognizing this, and further recognizing that multiplication of probabilities in log space is equal to the sum of the logarithm of each of the probabilities. We get our final gradient expression <math> \sum_\tau P(\tau | \theta) R( \tau) \sum_{t = 0}^T \nabla_\theta \log \pi_\theta (a_t | s_t) </math>.
Rewriting this into an expectation, we have \nabla_theta J (\theta) = E_{\tau \sim \pi_\theta}\left[R(\tau)\sum_{t = 0}^T \nabla_\theta \log \pi_\theta (a_t | s_t)\right]. Using the formula for discounted reward, we have our final formula E_{\tau \sim \pi_\theta}\left[\sum_{t = 0}^T \nabla_\theta \log \pi_\theta (a_t | s_t) \gamma^{T - t}r_t \right]
61412ac5a1624729e555cdcf77402f64cc6f0031
1278
1277
2024-05-26T00:52:28Z
Allen12
15
wikitext
text/x-wiki
Allen's REINFORCE notes
=== Links ===
* [http://www.incompleteideas.net/book/RLbook2020.pdf RLbook2020]
* [https://samuelebolotta.medium.com/2-deep-reinforcement-learning-policy-gradients-5a416a99700a Deep RL: Policy Gradients]
[[Category:Reinforcement Learning]]
=== Motivation ===
Recall that the objective of Reinforcement Learning is to find an optimal policy <math> \pi^* </math> which we encode in a neural network with parameters <math>\theta^*</math>. <math> \pi_\theta </math> is a mapping from observations to actions. These optimal parameters are defined as
<math>\theta^* = \text{argmax}_\theta E_{\tau \sim p_\theta(\tau)} \left[ \sum_t r(s_t, a_t) \right] </math>. Let's unpack what this means. To phrase it in english, this is basically saying that the optimal policy is one such that the expected value of the total reward over following a trajectory (<math> \tau </math>) determined by the policy is the highest over all policies.
=== Overview ===
<syntaxhighlight lang="bash" >
Initialize neural network with input dimensions = observation dimensions and output dimensions = action dimensions
For each episode:
While not terminated:
Get observation from environment
Use policy network to map observation to action distribution
Randomly sample one action from action distribution
Compute logarithmic probability of that action occurring
Step environment using action and store reward
Calculate loss over entire trajectory as function of probabilities and rewards
Recall loss functions are differentiable with respect to each parameter - thus, calculate how changes in parameters correlate with changes in the loss
Based on the loss, use a gradient descent policy to update weights
</syntaxhighlight>
=== Objective Function ===
The goal of reinforcement learning is to maximize the expected reward over the entire episode. We use <math>R(\tau)</math> to denote the total reward over some trajectory <math>\tau</math> defined by our policy. Thus we want to maximize <math>E_{\tau \sim \pi_\theta}[R(\tau)]</math>. We can use the definition of expected value to expand this as <math>\sum_\tau P(\tau | \theta) R (\tau)</math>, where the probability of a given trajectory occurring can further be expressed as <math> P(\tau | \theta) = P(s_0) \prod^T_{t=0} \pi_\theta(a_t | s_t) P(s_{t + 1} | s_t, a_t) </math>.
Now we want to find the gradient of <math> J (\theta) </math>, namely
<math>\nabla_\theta \sum_\tau P(\tau | \theta) R(\tau) </math>. Since the reward function isn't a dependent on the parameters. We can rearrange: <math> \sum_\tau R(\tau) \nabla_\theta P(\tau | \theta) </math>. The next step here is what's called the Log Derivative Trick.
Suppose we'd like to find <math>\nabla_{x_1}\log(f(x_1, x_2, x_3, ...))</math>. By the chain rule this is equal to <math>\frac{\nabla_{x_1}f(x_1, x_2, x_3 ...)}{f(x_1, x_2, x_3 ...)}</math>. Thus, by rearranging, we can take the gradient of any function with respect to some variable as <math>\nabla_{x_1}f(x_1, x_2, x_3, ...)= f(x_1, x_2, x_3,...)\nabla_{x_1}\log(f(x_1, x_2, x_3, ...)</math>.
Thus, using this idea, we can rewrite our gradient as <math> \sum_\tau R(\tau) p(\tau | \theta) \nabla_\theta \log P(\tau | \theta) </math>. Finally, using the definition of expectation again, we have <math> \nabla_\theta J(\theta) = E_{\tau \sim \pi_\theta} \left[\sum_{t=0}^T \nabla_\theta \log \pi_\theta (a_t | s_t) \right] </math>
=== Loss Computation ===
It is tricky for us to give our policy the notion of "total" reward and "total" probability. Thus, we desire to change these values parameterized by <math> \tau </math> to instead be parameterized by t. That is, instead of examining the behavior of the entire episode, we want to create a summation over timesteps. We know that <math> R(\tau) </math> is the total reward over all timesteps. Thus, we can rewrite the <math> R(\tau) </math> component at some timestep t as <math> \gamma^{T - t}r_t </math>, where gamma is our discount factor. Further, we recall that the probability of the trajectory occurring given the policy is <math> P(\tau | \theta) = P(s_0) \prod^T_{t=0} \pi_\theta(a_t | s_t) P(s_{t + 1} | s_t, a_t) </math>. Since the probabilities of <math> P(s_0) </math> and <math> P(s_{t+a} | s_t, a,t) </math> are determined by the environment and independent of the policy, their gradient is zero. Recognizing this, and further recognizing that multiplication of probabilities in log space is equal to the sum of the logarithm of each of the probabilities. We get our final gradient expression <math> \sum_\tau P(\tau | \theta) R( \tau) \sum_{t = 0}^T \nabla_\theta \log \pi_\theta (a_t | s_t) </math>.
Rewriting this into an expectation, we have <math> \nabla_theta J (\theta) = E_{\tau \sim \pi_\theta}\left[R(\tau)\sum_{t = 0}^T \nabla_\theta \log \pi_\theta (a_t | s_t)\right] </math>. Using the formula for discounted reward, we have our final formula <math> E_{\tau \sim \pi_\theta}\left[\sum_{t = 0}^T \nabla_\theta \log \pi_\theta (a_t | s_t) \gamma^{T - t}r_t \right] </math>.
df864b9df362cb671573f236c630dba167b05ed3
1279
1278
2024-05-26T00:53:11Z
Allen12
15
wikitext
text/x-wiki
Allen's REINFORCE notes
=== Links ===
* [http://www.incompleteideas.net/book/RLbook2020.pdf RLbook2020]
* [https://samuelebolotta.medium.com/2-deep-reinforcement-learning-policy-gradients-5a416a99700a Deep RL: Policy Gradients]
[[Category:Reinforcement Learning]]
=== Motivation ===
Recall that the objective of Reinforcement Learning is to find an optimal policy <math> \pi^* </math> which we encode in a neural network with parameters <math>\theta^*</math>. <math> \pi_\theta </math> is a mapping from observations to actions. These optimal parameters are defined as
<math>\theta^* = \text{argmax}_\theta E_{\tau \sim p_\theta(\tau)} \left[ \sum_t r(s_t, a_t) \right] </math>. Let's unpack what this means. To phrase it in english, this is basically saying that the optimal policy is one such that the expected value of the total reward over following a trajectory (<math> \tau </math>) determined by the policy is the highest over all policies.
=== Overview ===
<syntaxhighlight lang="bash" >
Initialize neural network with input dimensions = observation dimensions and output dimensions = action dimensions
For each episode:
While not terminated:
Get observation from environment
Use policy network to map observation to action distribution
Randomly sample one action from action distribution
Compute logarithmic probability of that action occurring
Step environment using action and store reward
Calculate loss over entire trajectory as function of probabilities and rewards
Recall loss functions are differentiable with respect to each parameter - thus, calculate how changes in parameters correlate with changes in the loss
Based on the loss, use a gradient descent policy to update weights
</syntaxhighlight>
=== Objective Function ===
The goal of reinforcement learning is to maximize the expected reward over the entire episode. We use <math>R(\tau)</math> to denote the total reward over some trajectory <math>\tau</math> defined by our policy. Thus we want to maximize <math>E_{\tau \sim \pi_\theta}[R(\tau)]</math>. We can use the definition of expected value to expand this as <math>\sum_\tau P(\tau | \theta) R (\tau)</math>, where the probability of a given trajectory occurring can further be expressed as <math> P(\tau | \theta) = P(s_0) \prod^T_{t=0} \pi_\theta(a_t | s_t) P(s_{t + 1} | s_t, a_t) </math>.
Now we want to find the gradient of <math> J (\theta) </math>, namely
<math>\nabla_\theta \sum_\tau P(\tau | \theta) R(\tau) </math>. Since the reward function isn't a dependent on the parameters. We can rearrange: <math> \sum_\tau R(\tau) \nabla_\theta P(\tau | \theta) </math>. The next step here is what's called the Log Derivative Trick.
Suppose we'd like to find <math>\nabla_{x_1}\log(f(x_1, x_2, x_3, ...))</math>. By the chain rule this is equal to <math>\frac{\nabla_{x_1}f(x_1, x_2, x_3 ...)}{f(x_1, x_2, x_3 ...)}</math>. Thus, by rearranging, we can take the gradient of any function with respect to some variable as <math>\nabla_{x_1}f(x_1, x_2, x_3, ...)= f(x_1, x_2, x_3,...)\nabla_{x_1}\log(f(x_1, x_2, x_3, ...)</math>.
Thus, using this idea, we can rewrite our gradient as <math> \sum_\tau R(\tau) p(\tau | \theta) \nabla_\theta \log P(\tau | \theta) </math>.
=== Loss Computation ===
It is tricky for us to give our policy the notion of "total" reward and "total" probability. Thus, we desire to change these values parameterized by <math> \tau </math> to instead be parameterized by t. That is, instead of examining the behavior of the entire episode, we want to create a summation over timesteps. We know that <math> R(\tau) </math> is the total reward over all timesteps. Thus, we can rewrite the <math> R(\tau) </math> component at some timestep t as <math> \gamma^{T - t}r_t </math>, where gamma is our discount factor. Further, we recall that the probability of the trajectory occurring given the policy is <math> P(\tau | \theta) = P(s_0) \prod^T_{t=0} \pi_\theta(a_t | s_t) P(s_{t + 1} | s_t, a_t) </math>. Since the probabilities of <math> P(s_0) </math> and <math> P(s_{t+a} | s_t, a,t) </math> are determined by the environment and independent of the policy, their gradient is zero. Recognizing this, and further recognizing that multiplication of probabilities in log space is equal to the sum of the logarithm of each of the probabilities. We get our final gradient expression <math> \sum_\tau P(\tau | \theta) R( \tau) \sum_{t = 0}^T \nabla_\theta \log \pi_\theta (a_t | s_t) </math>.
Rewriting this into an expectation, we have <math> \nabla_theta J (\theta) = E_{\tau \sim \pi_\theta}\left[R(\tau)\sum_{t = 0}^T \nabla_\theta \log \pi_\theta (a_t | s_t)\right] </math>. Using the formula for discounted reward, we have our final formula <math> E_{\tau \sim \pi_\theta}\left[\sum_{t = 0}^T \nabla_\theta \log \pi_\theta (a_t | s_t) \gamma^{T - t}r_t \right] </math>.
f46b79ce2ac6a396da4734d914443efd360dff61
1280
1279
2024-05-26T01:13:05Z
Allen12
15
wikitext
text/x-wiki
Allen's REINFORCE notes
=== Links ===
* [http://www.incompleteideas.net/book/RLbook2020.pdf RLbook2020]
* [https://samuelebolotta.medium.com/2-deep-reinforcement-learning-policy-gradients-5a416a99700a Deep RL: Policy Gradients]
[[Category:Reinforcement Learning]]
=== Motivation ===
Recall that the objective of Reinforcement Learning is to find an optimal policy <math> \pi^* </math> which we encode in a neural network with parameters <math>\theta^*</math>. <math> \pi_\theta </math> is a mapping from observations to actions. These optimal parameters are defined as
<math>\theta^* = \text{argmax}_\theta E_{\tau \sim p_\theta(\tau)} \left[ \sum_t r(s_t, a_t) \right] </math>. Let's unpack what this means. To phrase it in english, this is basically saying that the optimal policy is one such that the expected value of the total reward over following a trajectory (<math> \tau </math>) determined by the policy is the highest over all policies.
=== Overview ===
<syntaxhighlight lang="bash" >
Initialize neural network with input dimensions = observation dimensions and output dimensions = action dimensions
For each episode:
While not terminated:
Get observation from environment
Use policy network to map observation to action distribution
Randomly sample one action from action distribution
Compute logarithmic probability of that action occurring
Step environment using action and store reward
Calculate loss over entire trajectory as function of probabilities and rewards
Recall loss functions are differentiable with respect to each parameter - thus, calculate how changes in parameters correlate with changes in the loss
Based on the loss, use a gradient descent policy to update weights
</syntaxhighlight>
=== Objective Function ===
The goal of reinforcement learning is to maximize the expected reward over the entire episode. We use <math>R(\tau)</math> to denote the total reward over some trajectory <math>\tau</math> defined by our policy. Thus we want to maximize <math>E_{\tau \sim \pi_\theta}[R(\tau)]</math>. We can use the definition of expected value to expand this as <math>\sum_\tau P(\tau | \theta) R (\tau)</math>, where the probability of a given trajectory occurring can further be expressed as <math> P(\tau | \theta) = P(s_0) \prod^T_{t=0} \pi_\theta(a_t | s_t) P(s_{t + 1} | s_t, a_t) </math>.
Now we want to find the gradient of <math> J (\theta) </math>, namely
<math>\nabla_\theta \sum_\tau P(\tau | \theta) R(\tau) </math>. Since the reward function isn't a dependent on the parameters. We can rearrange: <math> \sum_\tau R(\tau) \nabla_\theta P(\tau | \theta) </math>. The next step here is what's called the Log Derivative Trick.
Suppose we'd like to find <math>\nabla_{x_1}\log(f(x_1, x_2, x_3, ...))</math>. By the chain rule this is equal to <math>\frac{\nabla_{x_1}f(x_1, x_2, x_3 ...)}{f(x_1, x_2, x_3 ...)}</math>. Thus, by rearranging, we can take the gradient of any function with respect to some variable as <math>\nabla_{x_1}f(x_1, x_2, x_3, ...)= f(x_1, x_2, x_3,...)\nabla_{x_1}\log(f(x_1, x_2, x_3, ...)</math>.
Thus, using this idea, we can rewrite our gradient as <math> \sum_\tau R(\tau) p(\tau | \theta) \nabla_\theta \log P(\tau | \theta) </math>.
=== Loss Computation ===
It is tricky for us to give our policy the notion of "total" reward and "total" probability. Thus, we desire to change these values parameterized by <math> \tau </math> to instead be parameterized by t. That is, instead of examining the behavior of the entire episode, we want to create a summation over timesteps. We know that <math> R(\tau) </math> is the total reward over all timesteps. Thus, we can rewrite the <math> R(\tau) </math> component at some timestep t as <math> \gamma^{T - t}r_t </math>, where gamma is our discount factor. Further, we recall that the probability of the trajectory occurring given the policy is <math> P(\tau | \theta) = P(s_0) \prod^T_{t=0} \pi_\theta(a_t | s_t) P(s_{t + 1} | s_t, a_t) </math>. Since the probabilities of <math> P(s_0) </math> and <math> P(s_{t+a} | s_t, a,t) </math> are determined by the environment and independent of the policy, their gradient is zero. Recognizing this, and further recognizing that multiplication of probabilities in log space is equal to the sum of the logarithm of each of the probabilities. We get our final gradient expression <math> \sum_\tau P(\tau | \theta) R( \tau) \sum_{t = 0}^T \nabla_\theta \log \pi_\theta (a_t | s_t) </math>.
Rewriting this into an expectation, we have <math> \nabla_\theta J (\theta) = E_{\tau \sim \pi_\theta}\left[R(\tau)\sum_{t = 0}^T \nabla_\theta \log \pi_\theta (a_t | s_t)\right] </math>. Using the formula for discounted reward, we have our final formula <math> E_{\tau \sim \pi_\theta}\left[\sum_{t = 0}^T \nabla_\theta \log \pi_\theta (a_t | s_t) \gamma^{T - t}r_t \right] </math>.
222d420eb14650bb883fd64580f21825217b3725
1281
1280
2024-05-26T01:17:05Z
Allen12
15
wikitext
text/x-wiki
Allen's REINFORCE notes
=== Links ===
* [http://www.incompleteideas.net/book/RLbook2020.pdf RLbook2020]
* [https://samuelebolotta.medium.com/2-deep-reinforcement-learning-policy-gradients-5a416a99700a Deep RL: Policy Gradients]
[[Category:Reinforcement Learning]]
=== Motivation ===
Recall that the objective of Reinforcement Learning is to find an optimal policy <math> \pi^* </math> which we encode in a neural network with parameters <math>\theta^*</math>. <math> \pi_\theta </math> is a mapping from observations to actions. These optimal parameters are defined as
<math>\theta^* = \text{argmax}_\theta E_{\tau \sim p_\theta(\tau)} \left[ \sum_t r(s_t, a_t) \right] </math>. Let's unpack what this means. To phrase it in english, this is basically saying that the optimal policy is one such that the expected value of the total reward over following a trajectory (<math> \tau </math>) determined by the policy is the highest over all policies.
=== Overview ===
<syntaxhighlight lang="bash" >
Initialize neural network with input dimensions = observation dimensions and output dimensions = action dimensions
For each episode:
While not terminated:
Get observation from environment
Use policy network to map observation to action distribution
Randomly sample one action from action distribution
Compute logarithmic probability of that action occurring
Step environment using action and store reward
Calculate loss over entire trajectory as function of probabilities and rewards
Recall loss functions are differentiable with respect to each parameter - thus, calculate how changes in parameters correlate with changes in the loss
Based on the loss, use a gradient descent policy to update weights
</syntaxhighlight>
=== Objective Function ===
The goal of reinforcement learning is to maximize the expected reward over the entire episode. We use <math>R(\tau)</math> to denote the total reward over some trajectory <math>\tau</math> defined by our policy. Thus we want to maximize <math>E_{\tau \sim \pi_\theta}[R(\tau)]</math>. We can use the definition of expected value to expand this as <math>\sum_\tau P(\tau | \theta) R (\tau)</math>, where the probability of a given trajectory occurring can further be expressed as <math> P(\tau | \theta) = P(s_0) \prod^T_{t=0} \pi_\theta(a_t | s_t) P(s_{t + 1} | s_t, a_t) </math>.
Now we want to find the gradient of <math> J (\theta) </math>, namely
<math>\nabla_\theta \sum_\tau P(\tau | \theta) R(\tau) </math>. Since the reward function isn't a dependent on the parameters. We can rearrange: <math> \sum_\tau R(\tau) \nabla_\theta P(\tau | \theta) </math>. The next step here is what's called the Log Derivative Trick.
Suppose we'd like to find <math>\nabla_{x_1}\log(f(x_1, x_2, x_3, ...))</math>. By the chain rule this is equal to <math>\frac{\nabla_{x_1}f(x_1, x_2, x_3 ...)}{f(x_1, x_2, x_3 ...)}</math>. Thus, by rearranging, we can take the gradient of any function with respect to some variable as <math>\nabla_{x_1}f(x_1, x_2, x_3, ...)= f(x_1, x_2, x_3,...)\nabla_{x_1}\log(f(x_1, x_2, x_3, ...)</math>.
Thus, using this idea, we can rewrite our gradient as <math> \sum_\tau R(\tau) p(\tau | \theta) \nabla_\theta \log P(\tau | \theta) </math>.
=== Loss Computation ===
It is tricky for us to give our policy the notion of "total" reward and "total" probability. Thus, we desire to change these values parameterized by <math> \tau </math> to instead be parameterized by t. That is, instead of examining the behavior of the entire episode, we want to create a summation over timesteps. We know that <math> R(\tau) </math> is the total reward over all timesteps. Thus, we can rewrite the <math> R(\tau) </math> component at some timestep t as <math> \gamma^{T - t}r_t </math>, where gamma is our discount factor. Further, we recall that the probability of the trajectory occurring given the policy is <math> P(\tau | \theta) = P(s_0) \prod^T_{t=0} \pi_\theta(a_t | s_t) P(s_{t + 1} | s_t, a_t) </math>. Since the probabilities of <math> P(s_0) </math> and <math> P(s_{t+a} | s_t, a,t) </math> are determined by the environment and independent of the policy, their gradient is zero. Recognizing this, and further recognizing that multiplication of probabilities in log space is equal to the sum of the logarithm of each of the probabilities. We get our final gradient expression <math> \sum_\tau P(\tau | \theta) R( \tau) \sum_{t = 0}^T \nabla_\theta \log \pi_\theta (a_t | s_t) </math>.
Rewriting this into an expectation, we have <math> \nabla_\theta J (\theta) = E_{\tau \sim \pi_\theta}\left[R(\tau)\sum_{t = 0}^T \nabla_\theta \log \pi_\theta (a_t | s_t)\right] </math>. Using the formula for discounted reward, we have our final formula <math> E_{\tau \sim \pi_\theta}\left[\sum_{t = 0}^T \nabla_\theta \log \pi_\theta (a_t | s_t) \gamma^{T - t}r_t \right] </math>. This is why our loss is equal to -\sum_{t = 0}^T \log \pi_\theta (a_t | s_t) \gamma^{T - t}r_t \right
e23583aa8b2f26aef53c64e55ed7c7c0c41f7648
1282
1281
2024-05-26T01:17:53Z
Allen12
15
wikitext
text/x-wiki
Allen's REINFORCE notes
=== Links ===
* [http://www.incompleteideas.net/book/RLbook2020.pdf RLbook2020]
* [https://samuelebolotta.medium.com/2-deep-reinforcement-learning-policy-gradients-5a416a99700a Deep RL: Policy Gradients]
[[Category:Reinforcement Learning]]
=== Motivation ===
Recall that the objective of Reinforcement Learning is to find an optimal policy <math> \pi^* </math> which we encode in a neural network with parameters <math>\theta^*</math>. <math> \pi_\theta </math> is a mapping from observations to actions. These optimal parameters are defined as
<math>\theta^* = \text{argmax}_\theta E_{\tau \sim p_\theta(\tau)} \left[ \sum_t r(s_t, a_t) \right] </math>. Let's unpack what this means. To phrase it in english, this is basically saying that the optimal policy is one such that the expected value of the total reward over following a trajectory (<math> \tau </math>) determined by the policy is the highest over all policies.
=== Overview ===
<syntaxhighlight lang="bash" >
Initialize neural network with input dimensions = observation dimensions and output dimensions = action dimensions
For each episode:
While not terminated:
Get observation from environment
Use policy network to map observation to action distribution
Randomly sample one action from action distribution
Compute logarithmic probability of that action occurring
Step environment using action and store reward
Calculate loss over entire trajectory as function of probabilities and rewards
Recall loss functions are differentiable with respect to each parameter - thus, calculate how changes in parameters correlate with changes in the loss
Based on the loss, use a gradient descent policy to update weights
</syntaxhighlight>
=== Objective Function ===
The goal of reinforcement learning is to maximize the expected reward over the entire episode. We use <math>R(\tau)</math> to denote the total reward over some trajectory <math>\tau</math> defined by our policy. Thus we want to maximize <math>E_{\tau \sim \pi_\theta}[R(\tau)]</math>. We can use the definition of expected value to expand this as <math>\sum_\tau P(\tau | \theta) R (\tau)</math>, where the probability of a given trajectory occurring can further be expressed as <math> P(\tau | \theta) = P(s_0) \prod^T_{t=0} \pi_\theta(a_t | s_t) P(s_{t + 1} | s_t, a_t) </math>.
Now we want to find the gradient of <math> J (\theta) </math>, namely
<math>\nabla_\theta \sum_\tau P(\tau | \theta) R(\tau) </math>. Since the reward function isn't a dependent on the parameters. We can rearrange: <math> \sum_\tau R(\tau) \nabla_\theta P(\tau | \theta) </math>. The next step here is what's called the Log Derivative Trick.
Suppose we'd like to find <math>\nabla_{x_1}\log(f(x_1, x_2, x_3, ...))</math>. By the chain rule this is equal to <math>\frac{\nabla_{x_1}f(x_1, x_2, x_3 ...)}{f(x_1, x_2, x_3 ...)}</math>. Thus, by rearranging, we can take the gradient of any function with respect to some variable as <math>\nabla_{x_1}f(x_1, x_2, x_3, ...)= f(x_1, x_2, x_3,...)\nabla_{x_1}\log(f(x_1, x_2, x_3, ...)</math>.
Thus, using this idea, we can rewrite our gradient as <math> \sum_\tau R(\tau) p(\tau | \theta) \nabla_\theta \log P(\tau | \theta) </math>.
=== Loss Computation ===
It is tricky for us to give our policy the notion of "total" reward and "total" probability. Thus, we desire to change these values parameterized by <math> \tau </math> to instead be parameterized by t. That is, instead of examining the behavior of the entire episode, we want to create a summation over timesteps. We know that <math> R(\tau) </math> is the total reward over all timesteps. Thus, we can rewrite the <math> R(\tau) </math> component at some timestep t as <math> \gamma^{T - t}r_t </math>, where gamma is our discount factor. Further, we recall that the probability of the trajectory occurring given the policy is <math> P(\tau | \theta) = P(s_0) \prod^T_{t=0} \pi_\theta(a_t | s_t) P(s_{t + 1} | s_t, a_t) </math>. Since the probabilities of <math> P(s_0) </math> and <math> P(s_{t+a} | s_t, a,t) </math> are determined by the environment and independent of the policy, their gradient is zero. Recognizing this, and further recognizing that multiplication of probabilities in log space is equal to the sum of the logarithm of each of the probabilities. We get our final gradient expression <math> \sum_\tau P(\tau | \theta) R( \tau) \sum_{t = 0}^T \nabla_\theta \log \pi_\theta (a_t | s_t) </math>.
Rewriting this into an expectation, we have <math> \nabla_\theta J (\theta) = E_{\tau \sim \pi_\theta}\left[R(\tau)\sum_{t = 0}^T \nabla_\theta \log \pi_\theta (a_t | s_t)\right] </math>. Using the formula for discounted reward, we have our final formula <math> E_{\tau \sim \pi_\theta}\left[\sum_{t = 0}^T \nabla_\theta \log \pi_\theta (a_t | s_t) \gamma^{T - t}r_t \right] </math>. This is why our loss is equal to <math>-\sum_{t = 0}^T \log \pi_\theta (a_t | s_t) \gamma^{T - t}r_t \right</math>.
d62c0c465dcd8ea55dfdef86b6478c1ccf274c07
1283
1282
2024-05-26T01:19:32Z
Allen12
15
wikitext
text/x-wiki
Allen's REINFORCE notes
=== Links ===
* [http://www.incompleteideas.net/book/RLbook2020.pdf RLbook2020]
* [https://samuelebolotta.medium.com/2-deep-reinforcement-learning-policy-gradients-5a416a99700a Deep RL: Policy Gradients]
[[Category:Reinforcement Learning]]
=== Motivation ===
Recall that the objective of Reinforcement Learning is to find an optimal policy <math> \pi^* </math> which we encode in a neural network with parameters <math>\theta^*</math>. <math> \pi_\theta </math> is a mapping from observations to actions. These optimal parameters are defined as
<math>\theta^* = \text{argmax}_\theta E_{\tau \sim p_\theta(\tau)} \left[ \sum_t r(s_t, a_t) \right] </math>. Let's unpack what this means. To phrase it in english, this is basically saying that the optimal policy is one such that the expected value of the total reward over following a trajectory (<math> \tau </math>) determined by the policy is the highest over all policies.
=== Overview ===
<syntaxhighlight lang="bash" >
Initialize neural network with input dimensions = observation dimensions and output dimensions = action dimensions
For each episode:
While not terminated:
Get observation from environment
Use policy network to map observation to action distribution
Randomly sample one action from action distribution
Compute logarithmic probability of that action occurring
Step environment using action and store reward
Calculate loss over entire trajectory as function of probabilities and rewards
Recall loss functions are differentiable with respect to each parameter - thus, calculate how changes in parameters correlate with changes in the loss
Based on the loss, use a gradient descent policy to update weights
</syntaxhighlight>
=== Objective Function ===
The goal of reinforcement learning is to maximize the expected reward over the entire episode. We use <math>R(\tau)</math> to denote the total reward over some trajectory <math>\tau</math> defined by our policy. Thus we want to maximize <math>E_{\tau \sim \pi_\theta}[R(\tau)]</math>. We can use the definition of expected value to expand this as <math>\sum_\tau P(\tau | \theta) R (\tau)</math>, where the probability of a given trajectory occurring can further be expressed as <math> P(\tau | \theta) = P(s_0) \prod^T_{t=0} \pi_\theta(a_t | s_t) P(s_{t + 1} | s_t, a_t) </math>.
Now we want to find the gradient of <math> J (\theta) </math>, namely
<math>\nabla_\theta \sum_\tau P(\tau | \theta) R(\tau) </math>. Since the reward function isn't a dependent on the parameters. We can rearrange: <math> \sum_\tau R(\tau) \nabla_\theta P(\tau | \theta) </math>. The next step here is what's called the Log Derivative Trick.
Suppose we'd like to find <math>\nabla_{x_1}\log(f(x_1, x_2, x_3, ...))</math>. By the chain rule this is equal to <math>\frac{\nabla_{x_1}f(x_1, x_2, x_3 ...)}{f(x_1, x_2, x_3 ...)}</math>. Thus, by rearranging, we can take the gradient of any function with respect to some variable as <math>\nabla_{x_1}f(x_1, x_2, x_3, ...)= f(x_1, x_2, x_3,...)\nabla_{x_1}\log(f(x_1, x_2, x_3, ...)</math>.
Thus, using this idea, we can rewrite our gradient as <math> \sum_\tau R(\tau) p(\tau | \theta) \nabla_\theta \log P(\tau | \theta) </math>.
=== Loss Computation ===
It is tricky for us to give our policy the notion of "total" reward and "total" probability. Thus, we desire to change these values parameterized by <math> \tau </math> to instead be parameterized by t. That is, instead of examining the behavior of the entire episode, we want to create a summation over timesteps. We know that <math> R(\tau) </math> is the total reward over all timesteps. Thus, we can rewrite the <math> R(\tau) </math> component at some timestep t as <math> \gamma^{T - t}r_t </math>, where gamma is our discount factor. Further, we recall that the probability of the trajectory occurring given the policy is <math> P(\tau | \theta) = P(s_0) \prod^T_{t=0} \pi_\theta(a_t | s_t) P(s_{t + 1} | s_t, a_t) </math>. Since the probabilities of <math> P(s_0) </math> and <math> P(s_{t+a} | s_t, a,t) </math> are determined by the environment and independent of the policy, their gradient is zero. Recognizing this, and further recognizing that multiplication of probabilities in log space is equal to the sum of the logarithm of each of the probabilities. We get our final gradient expression <math> \sum_\tau P(\tau | \theta) R( \tau) \sum_{t = 0}^T \nabla_\theta \log \pi_\theta (a_t | s_t) </math>.
Rewriting this into an expectation, we have <math> \nabla_\theta J (\theta) = E_{\tau \sim \pi_\theta}\left[R(\tau)\sum_{t = 0}^T \nabla_\theta \log \pi_\theta (a_t | s_t)\right] </math>. Using the formula for discounted reward, we have our final formula <math> E_{\tau \sim \pi_\theta}\left[\sum_{t = 0}^T \nabla_\theta \log \pi_\theta (a_t | s_t) \gamma^{T - t}r_t \right] </math>. This is why our loss is equal to <math> -\sum_{t = 0}^T \log \pi_\theta (a_t | s_t) \gamma^{T - t}r_t </math>, since using the chain rule to take its derivative gives us the formula for the gradient..
79c56fcbefe674013070da143ef1f9686ce4cf9d
1284
1283
2024-05-26T01:22:57Z
Allen12
15
wikitext
text/x-wiki
Allen's REINFORCE notes
=== Links ===
* [http://www.incompleteideas.net/book/RLbook2020.pdf RLbook2020]
* [https://samuelebolotta.medium.com/2-deep-reinforcement-learning-policy-gradients-5a416a99700a Deep RL: Policy Gradients]
[[Category:Reinforcement Learning]]
=== Motivation ===
Recall that the objective of Reinforcement Learning is to find an optimal policy <math> \pi^* </math> which we encode in a neural network with parameters <math>\theta^*</math>. <math> \pi_\theta </math> is a mapping from observations to actions. These optimal parameters are defined as
<math>\theta^* = \text{argmax}_\theta E_{\tau \sim p_\theta(\tau)} \left[ \sum_t r(s_t, a_t) \right] </math>. Let's unpack what this means. To phrase it in english, this is basically saying that the optimal policy is one such that the expected value of the total reward over following a trajectory (<math> \tau </math>) determined by the policy is the highest over all policies.
=== Overview ===
<syntaxhighlight lang="bash" >
Initialize neural network with input dimensions = observation dimensions and output dimensions = action dimensions
For each episode:
While not terminated:
Get observation from environment
Use policy network to map observation to action distribution
Randomly sample one action from action distribution
Compute logarithmic probability of that action occurring
Step environment using action and store reward
Calculate loss over entire trajectory as function of probabilities and rewards
Recall loss functions are differentiable with respect to each parameter - thus, calculate how changes in parameters correlate with changes in the loss
Based on the loss, use a gradient descent policy to update weights
</syntaxhighlight>
=== Objective Function ===
The goal of reinforcement learning is to maximize the expected reward over the entire episode. We use <math>R(\tau)</math> to denote the total reward over some trajectory <math>\tau</math> defined by our policy. Thus we want to maximize <math>E_{\tau \sim \pi_\theta}[R(\tau)]</math>. We can use the definition of expected value to expand this as <math>\sum_\tau P(\tau | \theta) R (\tau)</math>, where the probability of a given trajectory occurring can further be expressed as <math> P(\tau | \theta) = P(s_0) \prod^T_{t=0} \pi_\theta(a_t | s_t) P(s_{t + 1} | s_t, a_t) </math>.
Now we want to find the gradient of <math> J (\theta) </math>, namely
<math>\nabla_\theta \sum_\tau P(\tau | \theta) R(\tau) </math>. Since the reward function isn't a dependent on the parameters. We can rearrange: <math> \sum_\tau R(\tau) \nabla_\theta P(\tau | \theta) </math>. The next step here is what's called the Log Derivative Trick.
Suppose we'd like to find <math>\nabla_{x_1}\log(f(x_1, x_2, x_3, ...))</math>. By the chain rule this is equal to <math>\frac{\nabla_{x_1}f(x_1, x_2, x_3 ...)}{f(x_1, x_2, x_3 ...)}</math>. Thus, by rearranging, we can take the gradient of any function with respect to some variable as <math>\nabla_{x_1}f(x_1, x_2, x_3, ...)= f(x_1, x_2, x_3,...)\nabla_{x_1}\log(f(x_1, x_2, x_3, ...)</math>.
Thus, using this idea, we can rewrite our gradient as <math> \sum_\tau R(\tau) p(\tau | \theta) \nabla_\theta \log P(\tau | \theta) </math>.
=== Loss Computation ===
It is tricky for us to give our policy the notion of "total" reward and "total" probability. Thus, we desire to change these values parameterized by <math> \tau </math> to instead be parameterized by t. That is, instead of examining the behavior of the entire episode, we want to create a summation over timesteps. We know that <math> R(\tau) </math> is the total reward over all timesteps. Thus, we can rewrite the <math> R(\tau) </math> component at some timestep t as <math> \gamma^{T - t}r_t </math>, where gamma is our discount factor. Further, we recall that the probability of the trajectory occurring given the policy is <math> P(\tau | \theta) = P(s_0) \prod^T_{t=0} \pi_\theta(a_t | s_t) P(s_{t + 1} | s_t, a_t) </math>. Since the probabilities of <math> P(s_0) </math> and <math> P(s_{t+a} | s_t, a,t) </math> are determined by the environment and independent of the policy, their gradient is zero. Recognizing this, and further recognizing that multiplication of probabilities in log space is equal to the sum of the logarithm of each of the probabilities. We get our final gradient expression <math> \sum_\tau P(\tau | \theta) R( \tau) \sum_{t = 0}^T \nabla_\theta \log \pi_\theta (a_t | s_t) </math>.
Rewriting this into an expectation, we have <math> \nabla_\theta J (\theta) = E_{\tau \sim \pi_\theta}\left[R(\tau)\sum_{t = 0}^T \nabla_\theta \log \pi_\theta (a_t | s_t)\right] </math>. Using the formula for discounted reward, we have our final formula <math> E_{\tau \sim \pi_\theta}\left[\sum_{t = 0}^T \nabla_\theta \log \pi_\theta (a_t | s_t) \gamma^{T - t}r_t \right] </math>. This is why our loss is equal to <math> -\sum_{t = 0}^T \log \pi_\theta (a_t | s_t) \gamma^{T - t}r_t </math>, since using the chain rule to take its derivative gives us the formula for the gradient for our backwards pass (see Dennis' Optimization Notes).
70676ee0b6d291bfa62551204029135a79c52b7c
1285
1284
2024-05-26T01:23:18Z
Allen12
15
wikitext
text/x-wiki
Allen's REINFORCE notes
=== Links ===
* [http://www.incompleteideas.net/book/RLbook2020.pdf RLbook2020]
* [https://samuelebolotta.medium.com/2-deep-reinforcement-learning-policy-gradients-5a416a99700a Deep RL: Policy Gradients]
[[Category:Reinforcement Learning]]
=== Motivation ===
Recall that the objective of Reinforcement Learning is to find an optimal policy <math> \pi^* </math> which we encode in a neural network with parameters <math>\theta^*</math>. <math> \pi_\theta </math> is a mapping from observations to actions. These optimal parameters are defined as
<math>\theta^* = \text{argmax}_\theta E_{\tau \sim p_\theta(\tau)} \left[ \sum_t r(s_t, a_t) \right] </math>. Let's unpack what this means. In plain English, the optimal parameters are those for which the expected total reward, taken over trajectories <math> \tau </math> generated by following the policy, is highest among all policies.
=== Overview ===
<syntaxhighlight lang="bash" >
Initialize policy network with input dimensions = observation dimensions and output dimensions = action dimensions
For each episode:
    While not terminated:
        Get observation from environment
        Use policy network to map observation to action distribution
        Randomly sample one action from the action distribution
        Compute the log-probability of that action
        Step environment using the action and store the reward
    Calculate loss over the entire trajectory as a function of the stored log-probabilities and rewards
    Recall the loss is differentiable with respect to each parameter - thus, compute how changes in the parameters change the loss
    Use a gradient descent update rule to adjust the network weights based on the loss
</syntaxhighlight>
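As a concrete illustration of the pseudocode above, here is a minimal REINFORCE sketch. It assumes PyTorch and Gymnasium (with the CartPole-v1 environment) are available; the network size, learning rate, number of episodes, and discount factor are illustrative choices rather than part of these notes.
<syntaxhighlight lang="python">
# Minimal REINFORCE sketch (illustrative; assumes the gymnasium and torch packages).
import gymnasium as gym
import torch
import torch.nn as nn

env = gym.make("CartPole-v1")
obs_dim = env.observation_space.shape[0]   # input dimensions = observation dimensions
act_dim = env.action_space.n               # output dimensions = action dimensions

policy = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, act_dim))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-2)
gamma = 0.99

for episode in range(500):
    obs, _ = env.reset()
    log_probs, rewards = [], []
    terminated = truncated = False
    while not (terminated or truncated):
        logits = policy(torch.as_tensor(obs, dtype=torch.float32))
        dist = torch.distributions.Categorical(logits=logits)  # action distribution
        action = dist.sample()                                  # randomly sample one action
        log_probs.append(dist.log_prob(action))                 # log-probability of that action
        obs, reward, terminated, truncated, _ = env.step(action.item())
        rewards.append(float(reward))

    # Loss over the whole trajectory: -sum_t log pi(a_t|s_t) * gamma^(T - t) * r_t,
    # matching the weighting used in the Loss Computation section below.
    T = len(rewards) - 1
    weights = torch.tensor([gamma ** (T - t) * r for t, r in enumerate(rewards)])
    loss = -(torch.stack(log_probs) * weights).sum()

    optimizer.zero_grad()
    loss.backward()   # gradients of the loss with respect to the policy parameters
    optimizer.step()  # gradient descent update of the weights
</syntaxhighlight>
Using Adam here rather than plain gradient descent is simply a convenient choice of optimizer; the update rules themselves are discussed in Dennis' Optimization Notes.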
=== Objective Function ===
The goal of reinforcement learning is to maximize the expected reward over the entire episode. We use <math>R(\tau)</math> to denote the total reward over some trajectory <math>\tau</math> defined by our policy. Thus we want to maximize <math>E_{\tau \sim \pi_\theta}[R(\tau)]</math>. We can use the definition of expected value to expand this as <math>\sum_\tau P(\tau | \theta) R (\tau)</math>, where the probability of a given trajectory occurring can further be expressed as <math> P(\tau | \theta) = P(s_0) \prod^T_{t=0} \pi_\theta(a_t | s_t) P(s_{t + 1} | s_t, a_t) </math>.
Now we want to find the gradient of our objective <math> J(\theta) = E_{\tau \sim \pi_\theta}[R(\tau)] </math>, namely
<math>\nabla_\theta \sum_\tau P(\tau | \theta) R(\tau) </math>. Since the reward function does not depend on the parameters <math> \theta </math>, we can rearrange this as <math> \sum_\tau R(\tau) \nabla_\theta P(\tau | \theta) </math>. The next step is what's called the Log Derivative Trick.
Suppose we'd like to find <math>\nabla_{x_1}\log(f(x_1, x_2, x_3, ...))</math>. By the chain rule this is equal to <math>\frac{\nabla_{x_1}f(x_1, x_2, x_3, ...)}{f(x_1, x_2, x_3, ...)}</math>. Thus, by rearranging, we can take the gradient of any function with respect to some variable as <math>\nabla_{x_1}f(x_1, x_2, x_3, ...)= f(x_1, x_2, x_3,...)\nabla_{x_1}\log(f(x_1, x_2, x_3, ...))</math>.
Thus, using this idea, we can rewrite our gradient as <math> \sum_\tau R(\tau) P(\tau | \theta) \nabla_\theta \log P(\tau | \theta) </math>.
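To connect this to a per-timestep expression, we can expand the log of the trajectory probability defined above (an intermediate step that the next section uses implicitly): <math> \nabla_\theta \log P(\tau | \theta) = \nabla_\theta \left[ \log P(s_0) + \sum_{t=0}^T \log \pi_\theta(a_t | s_t) + \sum_{t=0}^T \log P(s_{t+1} | s_t, a_t) \right] = \sum_{t=0}^T \nabla_\theta \log \pi_\theta(a_t | s_t) </math>, since the initial-state and transition terms do not depend on <math> \theta </math>. Substituting this back gives <math> \nabla_\theta J(\theta) = \sum_\tau P(\tau | \theta) R(\tau) \sum_{t=0}^T \nabla_\theta \log \pi_\theta(a_t | s_t) </math>, which is the expression used in the next section.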
=== Loss Computation ===
It is tricky to give our policy a notion of "total" reward and "total" probability. We therefore want to change these quantities, which are parameterized by <math> \tau </math>, to instead be parameterized by the timestep <math> t </math>. That is, instead of examining the behavior of the entire episode at once, we want a summation over timesteps. We know that <math> R(\tau) </math> is the total reward over all timesteps, so we can rewrite the <math> R(\tau) </math> contribution at some timestep <math> t </math> as <math> \gamma^{T - t}r_t </math>, where <math> \gamma </math> is our discount factor. Further, recall that the probability of the trajectory occurring given the policy is <math> P(\tau | \theta) = P(s_0) \prod^T_{t=0} \pi_\theta(a_t | s_t) P(s_{t + 1} | s_t, a_t) </math>. Since <math> P(s_0) </math> and <math> P(s_{t+1} | s_t, a_t) </math> are determined by the environment and are independent of the policy, their gradients with respect to <math> \theta </math> are zero. Recognizing this, and recognizing that the logarithm of a product of probabilities is the sum of the logarithms of the individual probabilities, we get our final gradient expression <math> \sum_\tau P(\tau | \theta) R(\tau) \sum_{t = 0}^T \nabla_\theta \log \pi_\theta (a_t | s_t) </math>.
Rewriting this as an expectation, we have <math> \nabla_\theta J (\theta) = E_{\tau \sim \pi_\theta}\left[R(\tau)\sum_{t = 0}^T \nabla_\theta \log \pi_\theta (a_t | s_t)\right] </math>. Using the per-timestep discounted reward, we have our final formula <math> \nabla_\theta J (\theta) = E_{\tau \sim \pi_\theta}\left[\sum_{t = 0}^T \nabla_\theta \log \pi_\theta (a_t | s_t) \gamma^{T - t}r_t \right] </math>. This is why our loss is defined as <math> -\sum_{t = 0}^T \log \pi_\theta (a_t | s_t) \gamma^{T - t}r_t </math>: its gradient with respect to <math> \theta </math> is the negative of the sampled policy gradient above, so minimizing this loss with gradient descent during the backwards pass ascends the objective (see Dennis' Optimization Notes).
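For reference, the loss in this final formula can be written as a small standalone function; this is a sketch with illustrative names, assuming PyTorch tensors hold the per-step log-probabilities and rewards of a single trajectory.
<syntaxhighlight lang="python">
import torch

def reinforce_loss(log_probs: torch.Tensor, rewards: torch.Tensor, gamma: float) -> torch.Tensor:
    """Return -sum_t log pi(a_t|s_t) * gamma^(T - t) * r_t for one trajectory.

    log_probs and rewards are 1-D tensors of the same length, indexed t = 0..T.
    """
    T = rewards.shape[0] - 1
    weights = torch.tensor([gamma ** (T - t) for t in range(T + 1)], dtype=rewards.dtype)
    return -(log_probs * weights * rewards).sum()
</syntaxhighlight>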
68c7bebc36ece8d53de937abeb18cecbcb18cd3f
Dennis' Optimization Notes
0
284
1262
2024-05-25T04:57:12Z
Dennisc
27
Initial notes on optimization
wikitext
text/x-wiki
Notes on various riffs on Gradient Descent from the perspective of neural networks.
[[Category: Gradient Descent]]
<span id="a-review-of-standard-gradient-descent"></span>
== A review of standard Gradient Descent ==
The goal of Gradient Descent is to minimize a loss function <math display="inline">L</math>. To be more specific, if <math display="inline">L : \mathbb R^n \to \mathbb R</math> is a differentiable multivariate function, we want to find the vector <math display="inline">w</math> that minimizes <math display="inline">L(w)</math>.
Given an initial vector <math display="inline">w_0</math>, we want to “move” in the direction <math display="inline">\Delta w</math> where <math display="inline">L(w_0) - L(w_0 + \Delta w)</math> is minimized (suppose the magnitude of <math display="inline">\Delta w</math> is fixed). By Cauchy’s Inequality, this is precisely when <math display="inline">\Delta w</math> is in the direction of <math display="inline">-\nabla L(w_0)</math>.
So given some <math display="inline">w_n</math>, we want to update in the direction of <math display="inline">-\alpha \nabla L(w_n)</math>. This motivates setting <math display="inline">w_{n+1} = w_n - \alpha \nabla L(w_n)</math>, where <math display="inline">\alpha</math> is a scalar factor. We call <math display="inline">\alpha</math> the “learning rate” because it controls how fast the series <math display="inline">w_n</math> converges to the optimum. The main practical difficulty is tweaking <math display="inline">\alpha</math> so that the iteration converges quickly and reliably, and that is one of the considerations that the remaining algorithms try to address.
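A minimal sketch of this update rule in Python (the quadratic example loss and the learning rate are illustrative, not taken from these notes):
<syntaxhighlight lang="python">
import numpy as np

def gradient_descent(grad_L, w0, alpha=0.1, steps=100):
    """Iterate w_{n+1} = w_n - alpha * grad_L(w_n)."""
    w = np.asarray(w0, dtype=float)
    for _ in range(steps):
        w = w - alpha * grad_L(w)
    return w

# Example: L(w) = ||w||^2 has gradient 2w and its minimum at the origin.
print(gradient_descent(lambda w: 2.0 * w, w0=[3.0, -4.0]))
</syntaxhighlight>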
<span id="stochastic-gradient-descent"></span>
== Stochastic Gradient Descent ==
In practice we don’t actually know the “true gradient”. So instead we take some datasets, say datasets <math display="inline">1</math> through <math display="inline">n</math>, and for dataset <math display="inline">i</math> we derive an estimated gradient <math display="inline">\nabla L_i</math>. Then we may estimate <math display="inline">\nabla L</math> as
<math display="block">\frac{\nabla L_1 + \cdots + \nabla L_n}{n}.</math>
If it is easy to compute <math display="inline">\nabla L_i(w)</math> in general then we are golden: this is the best estimate of <math display="inline">\nabla L</math> we can get. But what if the <math display="inline">\nabla L_i</math> are computationally expensive to evaluate? Then there is a tradeoff between variance and computational cost when evaluating our estimate of <math display="inline">\nabla L</math>.
A very low-cost (but low-accuracy) way to estimate <math display="inline">\nabla L</math> is just via <math display="inline">\nabla L_1</math> (or any other <math display="inline">\nabla L_i</math>). But this is obviously problematic: we aren’t even using most of our data! A better balance can be struck as follows: to evaluate <math display="inline">\nabla L(w_n)</math>, select <math display="inline">k</math> functions at random from <math display="inline">\{\nabla L_1, \ldots, \nabla L_n\}</math>. Then estimate <math display="inline">\nabla L</math> as the average of those <math display="inline">k</math> functions ''only at that step''.
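A sketch of this minibatch idea, assuming the per-dataset gradient estimates are available as a list of Python functions (all of the names and constants here are illustrative):
<syntaxhighlight lang="python">
import numpy as np

def minibatch_sgd(grad_Ls, w0, alpha=0.1, k=2, steps=200, seed=0):
    """At each step, estimate grad L as the average of k randomly chosen grad_L_i."""
    rng = np.random.default_rng(seed)
    w = np.asarray(w0, dtype=float)
    for _ in range(steps):
        batch = rng.choice(len(grad_Ls), size=k, replace=False)
        g = np.mean([grad_Ls[i](w) for i in batch], axis=0)
        w = w - alpha * g
    return w

# Example: four "datasets", each contributing a noisy gradient of L(w) = ||w||^2.
grad_Ls = [lambda w, c=c: 2.0 * w + c for c in (-0.2, -0.1, 0.1, 0.2)]
print(minibatch_sgd(grad_Ls, w0=[3.0, -4.0]))
</syntaxhighlight>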
<span id="riffs-on-stochastic-gradient-descent"></span>
== Riffs on stochastic gradient descent ==
<span id="momentum"></span>
=== Momentum ===
See also [https://distill.pub/2017/momentum/ “Momentum” on Distill].
In typical stochastic gradient descent, the next step we take is based solely on the gradient at the current point, completely ignoring the past gradients. However, it often makes sense to take the past gradients into account. Of course, if we are at <math display="inline">w_{100}</math>, we should weight <math display="inline">\nabla L(w_{99})</math> much more heavily than <math display="inline">\nabla L(w_1)</math>.
The simplest way is to weight it with a geometric approach. So when we iterate, instead of taking <math display="inline">w_{n+1}</math> to satisfy
<math display="block">w_{n+1} - w_n = -\alpha \nabla L(w_n)</math>
like in standard gradient descent, we instead want to take <math display="inline">w_{n+1}</math> to satisfy
<math display="block">w_{n+1} - w_n = -\alpha \nabla L(w_n) - \beta \alpha \nabla L(w_{n-1}) - \cdots - \beta^n \nabla L(w_0).</math>
But this raises a concern: are we really going to be storing all of these terms, especially as <math display="inline">n</math> grows? Fortunately, we do not need to. For we may notice that
<math display="block">w_{n+1} - w_n = -\alpha \nabla L(w_n) - \beta (\alpha \nabla L(w_{n-1}) - \cdots - \beta^{n-1} L(w_0)) = -\alpha \nabla L(w_n) - \beta(w_n - w_{n-1}).</math>
To put it another way, if we write <math display="inline">w_n - w_{n-1} = \Delta w_n</math>, i.e. how much <math display="inline">w_n</math> differs from <math display="inline">w_{n-1}</math> by, we may rewrite this equation as
<math display="block">\Delta w_{n+1} = -\alpha \nabla L(w_n) + \beta \Delta w_n.</math>
Some of the benefits of using a momentum-based approach:
* most importantly, ''it can dramatically speed up convergence to a local minimum''.
* it makes convergence more likely in general
* escaping local minima/saddles/plateaus (its importance is possibly contested? See [https://www.reddit.com/r/MachineLearning/comments/dqbp9g/d_momentum_methods_helps_to_escape_local_minima/ this reddit thread])
<span id="rmsprop"></span>
=== RMSProp ===
Gradient descent also often suffers from effective step sizes that diminish over time. In order to counter this, we very broadly want to:
* track the magnitudes of the past gradients,
* if they have been low, multiply <math display="inline">\Delta w_{n+1}</math> by a scalar to increase the effective learning rate,
* (as a side effect, if past gradients have been quite large, we will temper the learning rate).
While performing our gradient descent to get <math display="inline">w_n \to w_{n+1}</math>, we create and store an auxiliary parameter <math display="inline">v_{n+1}</math> as follows:
<math display="block">v_{n+1} = \beta v_n + (1 - \beta) \nabla L(w_n)^2</math>
and define
<math display="block">w_{n+1} = w_n - \frac{\alpha}{\sqrt{v_n} + \epsilon} L(w),</math>
where <math display="inline">\alpha</math> as usual is the learning rate, <math display="inline">\beta</math> is the decay rate of <math display="inline">v_n</math>, and <math display="inline">\epsilon</math> is a constant that also needs to be fine-tuned.
We include the constant term <math display="inline">\epsilon</math> in order to ensure that the sequence <math display="inline">w_n</math> actually converges and to ensure numerical stability. If we are near the minimum, then <math display="inline">v_{n+1}</math> will be quite small, so without <math display="inline">\epsilon</math> the denominator would approach zero and the step sizes would blow up. With <math display="inline">\epsilon</math>, the denominator near a minimum is essentially just <math display="inline">\epsilon</math>, so the update behaves like standard gradient descent with the gradient scaled by a bounded constant factor (the setting in which standard gradient descent converges, after all), and we still achieve convergence when near a minimum.
Side note: in order to get RMSProp to interoperate with stochastic gradient descent, we instead compute the sequence <math display="inline">v_n</math> for each approximated loss function <math display="inline">L_i</math>.
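A minimal sketch of the RMSProp update written above (the values of <math display="inline">\alpha</math>, <math display="inline">\beta</math>, and <math display="inline">\epsilon</math> are illustrative):
<syntaxhighlight lang="python">
import numpy as np

def rmsprop(grad_L, w0, alpha=0.01, beta=0.9, eps=1e-8, steps=2000):
    """v_{n+1} = beta * v_n + (1 - beta) * grad^2, then w_{n+1} = w_n - alpha / (sqrt(v_{n+1}) + eps) * grad."""
    w = np.asarray(w0, dtype=float)
    v = np.zeros_like(w)
    for _ in range(steps):
        g = grad_L(w)
        v = beta * v + (1.0 - beta) * g ** 2
        w = w - alpha / (np.sqrt(v) + eps) * g
    return w

# Example: moves toward the minimum of L(w) = ||w||^2 at the origin.
print(rmsprop(lambda w: 2.0 * w, w0=[3.0, -4.0]))
</syntaxhighlight>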
<span id="adam"></span>
=== Adam ===
Adam ('''Ada'''ptive '''M'''oment Estimation) is a gradient descent modification that combines Momentum and RMSProp. We create two auxiliary variables while iterating <math display="inline">w_n</math> (where <math display="inline">\alpha</math> is the learning rate, <math display="inline">\beta_1</math> and <math display="inline">\beta_2</math> are decay parameters that need to be fine-tuned, and <math display="inline">\epsilon</math> is a parameter serving the same purpose as in RMSProp):
<math display="block">m_{n+1} = \beta_1 m_n + (1 - \beta_1) \nabla L(w_n)</math>
<math display="block">v_{n+1} = \beta_2 v_n + (1 - \beta_2) \nabla L(w_n)^2.</math>
For notational convenience, we will define
<math display="block">\widehat{m}_n = \frac{m_n}{1 - \beta_1^n}</math>
<math display="block">\widehat{v}_n = \frac{v_n}{1 - \beta_2^n}.</math>
Then our update function to get <math display="inline">w_{n+1}</math> is
<math display="block">w_{n+1} = w_n - \alpha \frac{\widehat{m}_n}{\sqrt{\widehat{v}_w} + \epsilon}.</math>
It is worth noting that though this formula does not explicitly include <math display="inline">\nabla L(w_n)</math>, it is accounted for in the <math display="inline">\widehat{m}_n</math> term through <math display="inline">m_n</math>.
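Putting the pieces together, a minimal sketch of the Adam update described above (the default values of <math display="inline">\alpha</math>, <math display="inline">\beta_1</math>, <math display="inline">\beta_2</math>, and <math display="inline">\epsilon</math> are common illustrative choices, not prescribed by these notes):
<syntaxhighlight lang="python">
import numpy as np

def adam(grad_L, w0, alpha=0.1, beta1=0.9, beta2=0.999, eps=1e-8, steps=200):
    """Momentum-style first moment m, RMSProp-style second moment v, with bias correction."""
    w = np.asarray(w0, dtype=float)
    m = np.zeros_like(w)
    v = np.zeros_like(w)
    for n in range(1, steps + 1):
        g = grad_L(w)
        m = beta1 * m + (1.0 - beta1) * g           # first moment estimate
        v = beta2 * v + (1.0 - beta2) * g ** 2      # second moment estimate
        m_hat = m / (1.0 - beta1 ** n)              # bias-corrected first moment
        v_hat = v / (1.0 - beta2 ** n)              # bias-corrected second moment
        w = w - alpha * m_hat / (np.sqrt(v_hat) + eps)
    return w

# Example: moves toward the minimum of L(w) = ||w||^2 at the origin.
print(adam(lambda w: 2.0 * w, w0=[3.0, -4.0]))
</syntaxhighlight>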
b67258a47979d4df1cb09c4b7e947c5b973bd6b7
Main Page
0
1
1286
1121
2024-05-26T19:03:51Z
Vrtnis
21
/* Added iCub*/
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.youtube.com/watch?v=LiNgr1tz49I&list=PLZnJoM76RM6ItAfZIxJYNKdaR_BobleLY Advanced Robot Dynamics]
| High-quality open-source course from CMU
|-
| [https://www.youtube.com/watch?v=6rUdAOCNXAU&list=PLZnJoM76RM6KugDT9sw5zhAmqKnGeoLRa Optimal Control]
| High-quality open-source course from CMU
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
| Textbook for graduate-level courses in robotics
|-
| [[Learning algorithms]]
| Resources related with training humanoid models in simulation and real environments
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Courses]]
| Category for pages about useful courses related to robotics
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Firmware]]
| Category for pages relating to firmware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Papers]]
| Category for humanoid robotics papers
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Honda Robotics]]
| [[ASIMO]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[SoftBank Robotics]]
| [[Pepper]], [[NAO]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]], [[G1]]
|-
| [[Instituto Italiano]]
| [[iCub]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[Xpeng]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]], [[Robonaut2]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[Proxy]]
|
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[SUPCON]]
| [[Navigator α]]
|-
| [[LASER Robotics]]
| [[HECTOR V2]]
|-
| [[UC Berkeley]]
| [[Berkeley Blue]]
|-
| [[NEURA Robotics]]
| [[4NE-1]]
|-
| [[University of Tehran]]
| [[Surena IV]]
|-
| [[Haier]]
|
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== List of Actuators ===
{| class="wikitable"
|-
! Actuator
! Notes
|-
| [[OBot]]
| Open-source actuator
|-
| [[SPIN Servo]]
| Open-source actuator
|-
| [[VESCular6]]
| A project based on [[VESC]]
|-
| [[ODrive]]
| A precision motor controller
|-
| [[Solo Motor Controller]]
| A motor controller alternative to the [[ODrive]].
|-
| [[J60]]
| Actuators built for the [[DEEP Robotics]] quadrupeds.
|-
| [[K-Scale Motor Controller]]
| An open-source motor controller
|}
134202c3758f70881bbe1644662c87d64d5a39f3
Instituto Italiano
0
285
1287
2024-05-26T19:09:36Z
Vrtnis
21
Created page with "== Instituto Italiano di Tecnologia (IIT) == The Instituto Italiano di Tecnologia (IIT) is a leading research institution in Italy, dedicated to advancing technology and inno..."
wikitext
text/x-wiki
== Instituto Italiano di Tecnologia (IIT) ==
The Instituto Italiano di Tecnologia (IIT) is a leading research institution in Italy, dedicated to advancing technology and innovation. It plays a pivotal role in the development of cutting-edge robotics and other technological advancements.
== iCub Crew ==
The [[iCub]] crew consists of international scientists and engineers with a passion for humanoid robotics. They have delivered one of the most remarkable robotics projects, driven by their dedication and enthusiasm.
=== Early Developments ===
Initially, the team designed the mechanics, electronics, firmware, and software infrastructure of the iCub. They subsequently implemented the first examples of motor control, including inverse kinematics and force control, as well as vision libraries, simple trackers, object recognition, attention systems, and central pattern generators (CPGs) for crawling. They also applied learning methods to dynamics.
Recently, the IIT team has consolidated their research along several key lines, including whole-body control (e.g., locomotion), vision and touch (both traditional and neuromorphic), grasping and manipulation, and speech and human-robot interaction. Their goal is to enhance iCub's interaction skills.
=== Development of R1 ===
The same team that developed the iCub has also created the new R1 robot. The R1 uses almost the same electronics and software as the iCub but features brand-new mechanics.
c1c4ad31a19fc3ddc7e9e22890a0e3176c79da9b
ICub
0
286
1288
2024-05-26T19:12:12Z
Vrtnis
21
Created page with "The iCub is a research-grade humanoid robot for developing and testing embodied AI algorithms. The iCub Project integrates results from various [[Instituto Italiano]] Research..."
wikitext
text/x-wiki
The iCub is a research-grade humanoid robot for developing and testing embodied AI algorithms. The iCub Project integrates results from various [[Instituto Italiano]] Research Units.
The iCub Project is a key initiative for IIT, aiming to transfer robotics technologies to industrial applications.
4b9deaacf12f239540e584310c731511801917c2
1289
1288
2024-05-26T19:15:48Z
Vrtnis
21
wikitext
text/x-wiki
The iCub is a research-grade humanoid robot for developing and testing embodied AI algorithms. The iCub Project integrates results from various [[Instituto Italiano]] Research Units. The iCub Project is a key initiative for IIT, aiming to transfer robotics technologies to industrial applications.
{{infobox robot
| name = iCub
| organization = [[Instituto Italiano]]
| height = 104 cm (3 ft 5 in)
| weight = 22 kg (48.5 lbs)
| video_link = https://www.youtube.com/watch?v=znF1-S9JmzI
| cost = Approximately €250,000
}}
Introduced by IIT, iCub is designed to understand and respond to human interaction. It is utilized extensively in research settings to explore motor control, vision, touch, grasping, manipulation, speech, and human-robot interaction.
== References ==
[https://www.iit.it/research/lines/icub IIT official website on iCub]
[https://www.youtube.com/watch?v=znF1-S9JmzI Presentation of iCub by IIT]
[[Category:Robots]]
[[Category:Humanoid Robots]]
[[Category:Instituto Italiano di Tecnologia]]
37f2278b6cb886df35886634b60449293da05d03
1290
1289
2024-05-26T19:18:37Z
Vrtnis
21
wikitext
text/x-wiki
The iCub is a research-grade humanoid robot for developing and testing embodied AI algorithms. The iCub Project integrates results from various [[Instituto Italiano]] Research Units. The iCub Project is a key initiative for IIT, aiming to transfer robotics technologies to industrial applications.
{{infobox robot
| name = iCub
| organization = [[Instituto Italiano]]
| height = 104 cm (3 ft 5 in)
| weight = 22 kg (48.5 lbs)
| video_link = https://www.youtube.com/watch?v=ErgfgF0uwUo
| cost = Approximately €250,000
}}
Introduced by IIT, iCub is designed to understand and respond to human interaction. It is utilized extensively in research settings to explore motor control, vision, touch, grasping, manipulation, speech, and human-robot interaction.
== References ==
[https://www.iit.it/research/lines/icub IIT official website on iCub]
[https://www.youtube.com/watch?v=znF1-S9JmzI Presentation of iCub by IIT]
[[Category:Robots]]
[[Category:Humanoid Robots]]
[[Category:Instituto Italiano di Tecnologia]]
87909d725ea741840b79783181f1eaaacdac8a39
1291
1290
2024-05-26T19:24:51Z
Vrtnis
21
/*Add general specs*/
wikitext
text/x-wiki
The iCub is a research-grade humanoid robot for developing and testing embodied AI algorithms. The iCub Project integrates results from various [[Instituto Italiano]] Research Units. The iCub Project is a key initiative for IIT, aiming to transfer robotics technologies to industrial applications.
{{infobox robot
| name = iCub
| organization = [[Instituto Italiano]]
| height = 104 cm (3 ft 5 in)
| weight = 22 kg (48.5 lbs)
| video_link = https://www.youtube.com/watch?v=ErgfgF0uwUo
| cost = Approximately €250,000
}}
== General Specifications ==
The number of degrees of freedom is as follows:
{| class="wikitable"
! Component !! # of degrees of freedom !! Notes
|-
| Eyes || 3 || Independent vergence and common tilt
|-
| Head || 3 || The neck has three degrees of freedom to tilt, swing, and pan
|-
| Chest || 3 || The torso can also tilt, swing, and pan
|-
| Arms || 7 (each) || 3 DoF in the shoulder, 1 in the elbow, and 3 in the wrist
|-
| Hands || 9 || The hand has 19 joints coupled in various combinations: the thumb, index, and middle finger are independent (coupled distal phalanxes), the ring and little finger are coupled. The thumb can additionally rotate over the palm.
|-
| Legs || 6 (each) || 6 DoF are sufficient to walk.
|}
== References ==
[https://www.iit.it/research/lines/icub IIT official website on iCub]
[https://www.youtube.com/watch?v=znF1-S9JmzI Presentation of iCub by IIT]
[[Category:Robots]]
[[Category:Humanoid Robots]]
[[Category:Instituto Italiano di Tecnologia]]
5ef89adec92b5fd70c5c64e8eea4c94a984e77b4
1295
1291
2024-05-26T19:31:56Z
Vrtnis
21
wikitext
text/x-wiki
The iCub is a research-grade humanoid robot for developing and testing embodied AI algorithms. The iCub Project integrates results from various [[Instituto Italiano]] Research Units. The iCub Project is a key initiative for IIT, aiming to transfer robotics technologies to industrial applications.
{{infobox robot
| name = iCub
| organization = [[Instituto Italiano]]
| height = 104 cm (3 ft 5 in)
| weight = 22 kg (48.5 lbs)
| video_link = https://www.youtube.com/watch?v=ErgfgF0uwUo
| cost = Approximately €250,000
}}
== General Specifications ==
The number of degrees of freedom is as follows:
{| class="wikitable"
! Component !! # of degrees of freedom !! Notes
|-
| Eyes || 3 || Independent vergence and common tilt
|-
| Head || 3 || The neck has three degrees of freedom to tilt, swing, and pan
|-
| Chest || 3 || The torso can also tilt, swing, and pan
|-
| Arms || 7 (each) || 3 DoF in the shoulder, 1 in the elbow, and 3 in the wrist
|-
| Hands || 9 || The hand has 19 joints coupled in various combinations: the thumb, index, and middle finger are independent (coupled distal phalanxes), the ring and little finger are coupled. The thumb can additionally rotate over the palm.
|-
| Legs || 6 (each) || 6 DoF are sufficient to walk.
|}
{| class="wikitable"
! Sensor type !! Number !! Notes
|-
| Cameras || 2 || Mounted in the eyes (see above), Pointgrey Dragonfly 2 (640x480)
|-
| Microphones || 2 || SoundMan high-quality stereo omnidirectional microphone, -46 dB, 10 V, 20 Hz – 20,000 Hz ±3 dB
|-
| Inertial sensors || 3+3 || Three axis gyroscopes + three axis accelerometers + three axis geomagnetic sensor based on BOSCH BNO055 chip, mounted in the head. (100Hz)
|-
| Joint sensors || For each large joint || Absolute magnetic encoder (12bit resolution @1kHz) at the joint, high-resolution incremental encoder at the motor side, hall-effect sensors for commutation (brushless motors only)
|}
== References ==
[https://www.iit.it/research/lines/icub IIT official website on iCub]
[https://www.youtube.com/watch?v=znF1-S9JmzI Presentation of iCub by IIT]
[[Category:Robots]]
[[Category:Humanoid Robots]]
[[Category:Instituto Italiano di Tecnologia]]
6ec581afbc756367f85afea72b0676239d937896
1296
1295
2024-05-26T19:33:42Z
Vrtnis
21
/* General Specifications */
wikitext
text/x-wiki
The iCub is a research-grade humanoid robot for developing and testing embodied AI algorithms. The iCub Project integrates results from various [[Instituto Italiano]] Research Units. The iCub Project is a key initiative for IIT, aiming to transfer robotics technologies to industrial applications.
{{infobox robot
| name = iCub
| organization = [[Instituto Italiano]]
| height = 104 cm (3 ft 5 in)
| weight = 22 kg (48.5 lbs)
| video_link = https://www.youtube.com/watch?v=ErgfgF0uwUo
| cost = Approximately €250,000
}}
== General Specifications ==
The number of degrees of freedom is as follows:
{| class="wikitable"
! Component !! # of degrees of freedom !! Notes
|-
| Eyes || 3 || Independent vergence and common tilt
|-
| Head || 3 || The neck has three degrees of freedom to tilt, swing, and pan
|-
| Chest || 3 || The torso can also tilt, swing, and pan
|-
| Arms || 7 (each) || 3 DoF in the shoulder, 1 in the elbow, and 3 in the wrist
|-
| Hands || 9 || The hand has 19 joints coupled in various combinations: the thumb, index, and middle finger are independent (coupled distal phalanxes), the ring and little finger are coupled. The thumb can additionally rotate over the palm.
|-
| Legs || 6 (each) || 6 DoF are sufficient to walk.
|}
== Sensors ==
{| class="wikitable"
! Sensor type !! Number !! Notes
|-
| Cameras || 2 || Mounted in the eyes (see above), Pointgrey Dragonfly 2 (640x480)
|-
| Microphones || 2 || SoundMan high-quality stereo omnidirectional microphone, -46 dB, 10 V, 20 Hz – 20,000 Hz ±3 dB
|-
| Inertial sensors || 3+3 || Three axis gyroscopes + three axis accelerometers + three axis geomagnetic sensor based on BOSCH BNO055 chip, mounted in the head. (100Hz)
|-
| Joint sensors || For each large joint || Absolute magnetic encoder (12bit resolution @1kHz) at the joint, high-resolution incremental encoder at the motor side, hall-effect sensors for commutation (brushless motors only)
|-
| Joint sensors || For each small joint || Absolute magnetic encoder (except the fingers which use a custom hall-effect sensor), medium-resolution incremental encoder at the motor
|-
| Force/torque sensors || 6 || 6x6-axis force/torque sensors are mounted on the upper part of the arm and legs plus 2 additional sensors mounted closer to the ankle for higher precision ZMP estimation (100Hz)
|-
| Tactile sensors || More than 3000 (*) || Capacitive tactile sensors (8 bit resolution at 40Hz) are installed in the fingertips, palms, upper and fore-arms, chest and optionally at the legs (*).
|}
== References ==
[https://www.iit.it/research/lines/icub IIT official website on iCub]
[https://www.youtube.com/watch?v=znF1-S9JmzI Presentation of iCub by IIT]
[[Category:Robots]]
[[Category:Humanoid Robots]]
[[Category:Instituto Italiano di Tecnologia]]
e661198d4623b122cc6f140e2b93da85f7063264
1297
1296
2024-05-26T19:37:12Z
Vrtnis
21
wikitext
text/x-wiki
The iCub is a research-grade humanoid robot for developing and testing embodied AI algorithms. The iCub Project integrates results from various [[Instituto Italiano]] Research Units. The iCub Project is a key initiative for IIT, aiming to transfer robotics technologies to industrial applications.
{{infobox robot
| name = iCub
| organization = [[Instituto Italiano]]
| height = 104 cm (3 ft 5 in)
| weight = 22 kg (48.5 lbs)
| video_link = https://www.youtube.com/watch?v=ErgfgF0uwUo
| cost = Approximately €250,000
}}
== General Specifications ==
The number of degrees of freedom is as follows:
{| class="wikitable"
! Component !! # of degrees of freedom !! Notes
|-
| Eyes || 3 || Independent vergence and common tilt
|-
| Head || 3 || The neck has three degrees of freedom to tilt, swing, and pan
|-
| Chest || 3 || The torso can also tilt, swing, and pan
|-
| Arms || 7 (each) || 3 DoF in the shoulder, 1 in the elbow, and 3 in the wrist
|-
| Hands || 9 || The hand has 19 joints coupled in various combinations: the thumb, index, and middle finger are independent (coupled distal phalanxes), the ring and little finger are coupled. The thumb can additionally rotate over the palm.
|-
| Legs || 6 (each) || 6 DoF are sufficient to walk.
|}
== Sensors ==
{| class="wikitable"
! Sensor type !! Number !! Notes
|-
| Cameras || 2 || Mounted in the eyes (see above), Pointgrey Dragonfly 2 (640x480)
|-
| Microphones || 2 || SoundMan high-quality stereo omnidirectional microphone, -46 dB, 10 V, 20 Hz – 20,000 Hz ±3 dB
|-
| Inertial sensors || 3+3 || Three axis gyroscopes + three axis accelerometers + three axis geomagnetic sensor based on BOSCH BNO055 chip, mounted in the head. (100Hz)
|-
| Joint sensors || For each large joint || Absolute magnetic encoder (12bit resolution @1kHz) at the joint, high-resolution incremental encoder at the motor side, hall-effect sensors for commutation (brushless motors only)
|-
| Joint sensors || For each small joint || Absolute magnetic encoder (except the fingers which use a custom hall-effect sensor), medium-resolution incremental encoder at the motor
|-
| Force/torque sensors || 6 || 6x6-axis force/torque sensors are mounted on the upper part of the arm and legs plus 2 additional sensors mounted closer to the ankle for higher precision ZMP estimation (100Hz)
|-
| Tactile sensors || More than 3000 (*) || Capacitive tactile sensors (8 bit resolution at 40Hz) are installed in the fingertips, palms, upper and fore-arms, chest and optionally at the legs (*).
|}
{| class="wikitable"
|+ Capabilities of iCub
! Task !! Description
|-
| Crawling || Using visual guidance with an optic marker on the floor
|-
| Solving complex 3D mazes || Demonstrated ability to navigate and solve intricate 3D mazes
|-
| Archery || Shooting arrows with a bow and learning to hit the center of the target
|-
| Facial expressions || Capable of expressing emotions through facial expressions
|-
| Force control || Utilizing proximal force/torque sensors for precise force control
|-
| Grasping small objects || Able to grasp and manipulate small objects such as balls and plastic bottles
|-
| Collision avoidance || Avoids collisions within non-static environments and can also avoid self-collision
|}
== References ==
[https://www.iit.it/research/lines/icub IIT official website on iCub]
[https://www.youtube.com/watch?v=znF1-S9JmzI Presentation of iCub by IIT]
[[Category:Robots]]
[[Category:Humanoid Robots]]
[[Category:Instituto Italiano di Tecnologia]]
b3ba4f09cce832e6b029031716b79e15b76bc71a
Allen's PPO Notes
0
287
1292
2024-05-26T19:27:49Z
Allen12
15
Created page with "Intuition: Want to avoid too large of a policy update #Smaller policy updates more likely to converge to optimal #Falling "off the cliff" might mean it's impossible to recover..."
wikitext
text/x-wiki
Intuition: Want to avoid too large of a policy update
#Smaller policy updates more likely to converge to optimal
#Falling "off the cliff" might mean it's impossible to recover
How we solve this: Measure how much policy changes w.r.t. previous, clip ratio to <math>[1-\varepsilon, 1 + \varepsilon]</math> removing incentive to go too far.
d94ebb613aa8743e2846cc2c86c3abace72d6f90
1293
1292
2024-05-26T19:28:00Z
Allen12
15
wikitext
text/x-wiki
Intuition: Want to avoid too large of a policy update
#Smaller policy updates more likely to converge to optimal
#Falling "off the cliff" might mean it's impossible to recover
How we solve this: Measure how much policy changes w.r.t. previous, clip ratio to <math>[1-\varepsilon, 1 + \varepsilon]</math> removing incentive to go too far.
889efc6a6834ab45f1b460fbf7ba95da3c2e435e
1294
1293
2024-05-26T19:28:16Z
Allen12
15
wikitext
text/x-wiki
Intuition: Want to avoid too large of a policy update
#Smaller policy updates more likely to converge to optimal
#Falling "off the cliff" might mean it's impossible to recover
How we solve this: Measure how much policy changes w.r.t. previous, clip ratio to <math>[1-\varepsilon, 1 + \varepsilon]</math> removing incentive to go too far.
4f2706d1bb897d6d4a31536371b0b4e58b81cfa4
1298
1294
2024-05-26T19:38:10Z
Allen12
15
wikitext
text/x-wiki
=== Advantage Function ===
<math> A(s, a) = Q(s, a) - V(s) </math>. Intuitively: the extra reward we get if we take an action at a state compared to the mean reward at that state. We use this advantage function to tell us how good the action is - if it's positive, the action is better than the others at that state, so we want to move in that direction, and if it's negative, the action is worse than the others at that state, so we move in the opposite direction.
=== Motivation ===
Intuition: Want to avoid too large of a policy update
#Smaller policy updates more likely to converge to optimal
#Falling "off the cliff" might mean it's impossible to recover
How we solve this: Measure how much policy changes w.r.t. previous, clip ratio to <math>[1-\varepsilon, 1 + \varepsilon]</math> removing incentive to go too far.
f179bef232fe526d26277cb4b018cefc7d4342ab
1299
1298
2024-05-26T19:47:23Z
Allen12
15
wikitext
text/x-wiki
=== Advantage Function ===
<math> A(s, a) = Q(s, a) - V(s) </math>. Intuitively: the extra reward we get if we take an action at a state compared to the mean reward at that state. We use this advantage function to tell us how good the action is - if it's positive, the action is better than the others at that state, so we want to move in that direction, and if it's negative, the action is worse than the others at that state, so we move in the opposite direction.
=== Motivation ===
Intuition: Want to avoid too large of a policy update
#Smaller policy updates more likely to converge to optimal
#Falling "off the cliff" might mean it's impossible to recover
How we solve this: Measure how much policy changes w.r.t. previous, clip ratio to <math>[1-\varepsilon, 1 + \varepsilon]</math> removing incentive to go too far.
=== Ratio Function ===
Intuitively, if we want to measure the divergence between our old and current policies, we want some way of figuring out the difference between action-state pairs in the old and new policies. We denote this as <math> r_t(\theta) = \frac{\pi_\theta(a_t | s_t)}{\pi_{\theta_{old}}(a_t|s_t)} </math>. A ratio greater than one indicates the action is more likely in the current policy than in the old policy, and if it's between 0 and 1, it indicates the opposite.
b165cc3a28ddbe1d58e6f7900e1026a98e1626ab
1300
1299
2024-05-26T20:17:22Z
Allen12
15
wikitext
text/x-wiki
=== Links ===
Hugging Face Deep RL course
=== Advantage Function ===
<math> A(s, a) = Q(s, a) - V(s) </math>. Intuitively: the extra reward we get if we take an action at a state compared to the mean reward at that state. We use this advantage function to tell us how good the action is - if it's positive, the action is better than the others at that state, so we want to move in that direction, and if it's negative, the action is worse than the others at that state, so we move in the opposite direction. Since it's often difficult and expensive to compute the Q value for all state-action pairs, we replace Q(s, a) with the sampled reward from taking the action. Using this advantage instead of the raw reward makes policy gradient updates more stable.
=== Motivation ===
Intuition: Want to avoid too large of a policy update
#Smaller policy updates more likely to converge to optimal
#Falling "off the cliff" might mean it's impossible to recover
How we solve this: Measure how much policy changes w.r.t. previous, clip ratio to <math>[1-\varepsilon, 1 + \varepsilon]</math> removing incentive to go too far.
=== Ratio Function ===
Intuitively, if we want to measure the divergence between our old and current policies, we want some way of figuring out the difference between action-state pairs in the old and new policies. We denote this as <math> r_t(\theta) = \frac{\pi_\theta(a_t | s_t)}{\pi_{\theta_{old}}(a_t|s_t)} </math>. A ratio greater than one indicates the action is more likely in the current policy than in the old policy, and if it's between 0 and 1, it indicates the opposite. This ratio function replaces the log probability in the policy objective function as the way of accounting for the change in parameters.
Let's step back for a moment and think about why we might want to do this. In standard policy gradients, after we use a trajectory to update our policy, the experience gained in that trajectory is now incorrect with respect to our current policy. We resolve this using importance sampling. If the actions of the old trajectory have become unlikely, the influence of that experience will be reduced. Thus, prior to clipping, our new loss function can be written in expectation form as <math> E \left[r_t(\theta)A_t\right] </math>.
=== Clipping ===
It's easier to understand this clipping when we break it down based on why we are clipping. Let's consider some possible cases:
1a43b8c46f2b6abf482d6e0dd44ed9834f3776c6
1302
1300
2024-05-27T01:28:27Z
Allen12
15
wikitext
text/x-wiki
=== Links ===
Hugging Face Deep RL course
=== Advantage Function ===
<math> A(s, a) = Q(s, a) - V(s) </math>. Intuitively: the extra reward we get if we take an action at a state compared to the mean reward at that state. We use this advantage function to tell us how good the action is - if it's positive, the action is better than the others at that state, so we want to move in that direction, and if it's negative, the action is worse than the others at that state, so we move in the opposite direction. Since it's often difficult and expensive to compute the Q value for all state-action pairs, we replace Q(s, a) with the sampled reward from taking the action. Using this advantage instead of the raw reward makes policy gradient updates more stable.
=== Motivation ===
Intuition: Want to avoid too large of a policy update
#Smaller policy updates more likely to converge to optimal
#Falling "off the cliff" might mean it's impossible to recover
How we solve this: Measure how much policy changes w.r.t. previous, clip ratio to <math>[1-\varepsilon, 1 + \varepsilon]</math> removing incentive to go too far.
=== Ratio Function ===
Intuitively, if we want to measure the divergence between our old and current policies, we want some way of figuring out the difference between action-state pairs in the old and new policies. We denote this as <math> r_t(\theta) = \frac{\pi_\theta(a_t | s_t)}{\pi_{\theta_{old}}(a_t|s_t)} </math>. A ratio greater than one indicates the action is more likely in the current policy than in the old policy, and if it's between 0 and 1, it indicates the opposite. This ratio function replaces the log probability in the policy objective function as the way of accounting for the change in parameters.
Let's step back for a moment and think about why we might want to do this. In standard policy gradients, after we use a trajectory to update our policy, the experience gained in that trajectory is now incorrect with respect to our current policy. We resolve this using importance sampling. If the actions of the old trajectory have become unlikely, the influence of that experience will be reduced. Thus, prior to clipping, our new loss function can be written in expectation form as <math> E \left[r_t(\theta)A_t\right] </math>. If we take the gradient, it ends up being a nearly identical expression to the standard policy gradient, only with <math> \nabla_\theta \pi_\theta(a_t | s_t) </math> divided by <math> \pi_{\theta_{old}}(a_t | s_t) </math> instead of by <math> \pi_\theta(a_t | s_t) </math>.
=== Clipping ===
Our clipped objective function is <math> L^{CLIP}(\theta) = E_t\left[\min\left(r_t(\theta)A_t,\ \mathrm{clip}(r_t(\theta),\, 1-\epsilon,\, 1+\epsilon)\,A_t\right)\right] </math>.
It's easier to understand this clipping when we break it down based on why we are clipping. Let's consider some possible cases:
# The ratio is in the range <math>[1 - \epsilon, 1 + \epsilon]</math>. Here we have no reason to clip - if the advantage is positive, we should encourage our policy to increase the probability of that action, and if it is negative, we should decrease the probability that the policy takes the action.
# The ratio is lower than <math> 1 - \epsilon </math>. If the advantage is positive, we still want to increase the probability of taking that action. If the advantage is negative, a policy update would decrease the probability of taking that action even further, so the objective is clipped, its gradient is 0, and we don't update our weights - even though the reward here was worse, we still want to explore.
# The ratio is greater than <math> 1 + \epsilon </math>. If the advantage is positive, we already have a higher probability of taking the action than under the previous policy, so the objective is clipped and we don't update further, avoiding getting too greedy. If the advantage is negative, the unclipped term is the smaller one, so no clipping occurs and the update still reduces the probability of that action.
352b61673581608a4876b04ac2908555ef7d3697
1303
1302
2024-05-27T01:33:43Z
Allen12
15
wikitext
text/x-wiki
=== Links ===
Hugging Face Deep RL course
=== Advantage Function ===
<math> A(s, a) = Q(s, a) - V(s) </math>. Intuitively: the extra reward we get if we take an action at a state compared to the mean reward at that state. We use this advantage function to tell us how good the action is - if it's positive, the action is better than the others at that state, so we want to move in that direction, and if it's negative, the action is worse than the others at that state, so we move in the opposite direction. Since it's often difficult and expensive to compute the Q value for all state-action pairs, we replace Q(s, a) with the sampled reward from taking the action. Using this advantage instead of the raw reward makes policy gradient updates more stable.
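To make the baseline idea concrete, here is a minimal NumPy sketch (not part of the original notes) of estimating advantages by subtracting a critic's value estimates from sampled returns; the array names and numbers are purely illustrative.
<syntaxhighlight lang="python">
import numpy as np

# Hypothetical sampled returns R_t from rollouts and critic estimates V(s_t).
sampled_returns = np.array([1.0, 0.5, 2.0])
value_estimates = np.array([0.8, 0.7, 1.5])

# A(s, a) is approximated as R_t - V(s_t): positive means the action did
# better than the average behaviour at that state, negative means worse.
advantages = sampled_returns - value_estimates
print(advantages)  # approximately [0.2, -0.2, 0.5]
</syntaxhighlight>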
=== Motivation ===
Intuition: Want to avoid too large of a policy update
#Smaller policy updates more likely to converge to optimal
#Falling "off the cliff" might mean it's impossible to recover
How we solve this: Measure how much policy changes w.r.t. previous, clip ratio to <math>[1-\varepsilon, 1 + \varepsilon]</math> removing incentive to go too far.
=== Ratio Function ===
Intuitively, if we want to measure the divergence between our old and current policies, we want some way of figuring out the difference between action-state pairs in the old and new policies. We denote this as <math> r_t(\theta) = \frac{\pi_\theta(a_t | s_t)}{\pi_{\theta_{old}}(a_t|s_t)} </math>. A ratio greater than one indicates the action is more likely in the current policy than in the old policy, and if it's between 0 and 1, it indicates the opposite. This ratio function replaces the log probability in the policy objective function as the way of accounting for the change in parameters.
Let's step back for a moment and think about why we might want to do this. In standard policy gradients, after we use a trajectory to update our policy, the experience gained in that trajectory is now incorrect with respect to our current policy. We resolve this using importance sampling. If the actions of the old trajectory have become unlikely, the influence of that experience will be reduced. Thus, prior to clipping, our new loss function can be written in expectation form as <math> E \left[r_t(\theta)A_t\right] </math>. If we take the gradient, it ends up being a nearly identical expression to the standard policy gradient, only with <math> \nabla_\theta \pi_\theta(a_t | s_t) </math> divided by <math> \pi_{\theta_{old}}(a_t | s_t) </math> instead of by <math> \pi_\theta(a_t | s_t) </math>.
=== Clipping ===
Our clipped objective function is <math> L^{CLIP}(\theta) = E_t\left[\min\left(r_t(\theta)A_t,\ \mathrm{clip}(r_t(\theta),\, 1-\epsilon,\, 1+\epsilon)\,A_t\right)\right] </math>.
It's easier to understand this clipping when we break it down based on why we are clipping. Let's consider some possible cases:
# The ratio is in the range <math>[1 - \epsilon, 1 + \epsilon]</math>. Here we have no reason to clip - if the advantage is positive, we should encourage our policy to increase the probability of that action, and if it is negative, we should decrease the probability that the policy takes the action.
# The ratio is lower than <math> 1 - \epsilon </math>. If the advantage is positive, we still want to increase the probability of taking that action. If the advantage is negative, a policy update would decrease the probability of taking that action even further, so the objective is clipped, its gradient is 0, and we don't update our weights - even though the reward here was worse, we still want to explore.
# The ratio is greater than <math> 1 + \epsilon </math>. If the advantage is positive, we already have a higher probability of taking the action than under the previous policy, so the objective is clipped and we don't update further, avoiding getting too greedy. If the advantage is negative, the unclipped term is the smaller one, so no clipping occurs and the update still reduces the probability of that action.
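To tie the ratio, advantage, and clipping together, here is a minimal NumPy sketch of the clipped surrogate objective described above; it is not from the original notes, and names such as <code>clipped_surrogate</code>, <code>new_logp</code>, and <code>old_logp</code> are illustrative.
<syntaxhighlight lang="python">
import numpy as np

def clipped_surrogate(new_logp, old_logp, advantages, epsilon=0.2):
    """PPO clipped surrogate objective (to be maximized), averaged over samples."""
    ratios = np.exp(new_logp - old_logp)                              # r_t(theta)
    unclipped = ratios * advantages                                   # r_t * A_t
    clipped = np.clip(ratios, 1 - epsilon, 1 + epsilon) * advantages  # clipped term
    return np.mean(np.minimum(unclipped, clipped))

# Toy example with made-up action probabilities and advantages.
new_logp = np.log(np.array([0.5, 0.1, 0.3]))
old_logp = np.log(np.array([0.4, 0.2, 0.3]))
advantages = np.array([1.0, -0.5, 0.2])
print(clipped_surrogate(new_logp, old_logp, advantages))
</syntaxhighlight>
Maximizing this objective (or minimizing its negative) is exactly what removes the incentive to push the ratio outside <math>[1-\epsilon, 1+\epsilon]</math>.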
553320fbafa2af7d2922e9e43b949aa52d253932
K-Scale CANdaddy
0
235
1301
1025
2024-05-27T01:13:29Z
Vedant
24
wikitext
text/x-wiki
=== MCU ===
* STM32
** Programming mode button (boot / reset)
=== USB ===
* TODO: document the USB interface
=== Buttons ===
* Button debouncer
** TODO: Add the IC name
=== LCD ===
* How do we connect I2C chip to the LCD screen?
=== CAN ===
* How to get supply-side VCC without having it on the bus (from supply-side GND)?
* What do we do if supply-side GND is missing?
=== Voltage ===
* Load sharing between USB and battery
=== Questions ===
# How does the switching regulator mechanism work?
# Do digital logic levels depend on source voltage? For example, let's say that a 3.3V powered device is sending signals to a 5V power device. From my understanding of the datasheet, a direct connection between the signals should not be allowed. However, why is this? What are the internal mechanisms of the pins that make it so that the logic levels need to be based off of the source voltage?
# I understand that there is a way to both use the battery as a source as well as to charge it using the USB. However, wouldn't this mean not being able to put a diode within the circuit directing current in one direction? Isn't this a hazard, destroying the circuit the moment a bit of noise comes through?
# Power Ground in the context of a switching regulator for our circuit to amplify 3.7 V battery to 5 V source. Do we just connect Power Ground to Battery Ground?
=== Notes ===
* Need to figure out the interface for the SPI lines using MCP2515
* Pros if successful:
* Reduces amount of wiring for STM32, requires just one SPI bus with a bunch of easily programmable chip selects
* Cons: Need to understand the interface for the MCP2515. Need to confirm how interrupts and timing works.
0f5ecaa1254dcdf0e71838ab45fb2009d0491f57
1304
1301
2024-05-27T09:02:27Z
Ben
2
/* Notes */
wikitext
text/x-wiki
=== MCU ===
* STM32
** Programming mode button (boot / reset)
=== USB ===
* TODO: document the USB interface
=== Buttons ===
* Button debouncer
** TODO: Add the IC name
=== LCD ===
* How do we connect I2C chip to the LCD screen?
=== CAN ===
* How to get supply-side VCC without having it on the bus (from supply-side GND)?
* What do we do if supply-side GND is missing?
=== Voltage ===
* Load sharing between USB and battery
=== Questions ===
# How does the switching regulator mechanism work?
# Do digital logic levels depend on source voltage? For example, let's say that a 3.3V powered device is sending signals to a 5V power device. From my understanding of the datasheet, a direct connection between the signals should not be allowed. However, why is this? What are the internal mechanisms of the pins that make it so that the logic levels need to be based off of the source voltage?
# I understand that there is a way to both use the battery as a source as well as to charge it using the USB. However, wouldn't this mean not being able to put a diode within the circuit directing current in one direction? Isn't this a hazard, destroying the circuit the moment a bit of noise comes through?
# Power Ground in the context of a switching regulator for our circuit to amplify 3.7 V battery to 5 V source. Do we just connect Power Ground to Battery Ground?
=== Notes ===
* Need to figure out the interface for the SPI lines using MCP2515
** Pros if successful:
*** Reduces amount of wiring for STM32, requires just one SPI bus with a bunch of easily programmable chip selects
** Cons: Need to understand the interface for the MCP2515. Need to confirm how interrupts and timing works.
4f4a89052402d0ea3f5aeaa7ccc9909fd64fc1d4
Controller Area Network (CAN)
0
155
1305
985
2024-05-27T18:13:54Z
Dymaxion
22
/* MCP2515 Driver */
wikitext
text/x-wiki
The '''Controller Area Network''' (CAN) is a vehicle bus standard designed to let microcontrollers and devices communicate with each other without a central host computer, enabling fast and robust data exchange between the various systems within a vehicle.
== MCP2515 ==
The '''MCP2515''' is an integrated circuit produced by Microchip Technology that functions as a stand-alone CAN controller. It connects to a host over SPI (Serial Peripheral Interface), which makes it easy to add CAN support to a wide range of systems. Although its primary market is the automotive industry, it is also used in many other control applications.<ref>MCP2515 Stand-Alone CAN Controller with SPI Interface. Microchip Technology. [https://www.microchip.com/wwwproducts/en/en010406 Official datasheet].</ref>
The MCP2515 bridges the connection between the CAN protocol and the SPI protocol by receiving CAN messages and translating them into SPI data, allowing the microcontroller to interpret the information. Similarly, it transforms SPI data into CAN messages for transmission.<ref>MCP2515 Stand-Alone CAN Controller with SPI Interface. Microchip Technology. [https://www.microchip.com/wwwproducts/en/en010406 Official datasheet].</ref>
The MCP2515 supports CAN 2.0A and 2.0B, so it aligns with the established CAN standards and can handle both standard and extended frame formats.<ref>MCP2515 Stand-Alone CAN Controller with SPI Interface. Microchip Technology. [https://www.microchip.com/wwwproducts/en/en010406 Official datasheet].</ref>
== Applications ==
The [https://python-can.readthedocs.io/en/stable/ python-can] library provides Controller Area Network support for Python, providing common abstractions to different hardware devices, and a suite of utilities for sending and receiving messages on a CAN bus.
There is an example of an installation on [[Jetson_Orin]] here.
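As a rough sketch of what python-can usage looks like (assuming a Linux SocketCAN interface that is already configured as <code>can0</code>; the identifier and payload below are placeholders):
<syntaxhighlight lang="python">
import can

# Open an existing SocketCAN interface (assumes "can0" is already up on Linux).
bus = can.interface.Bus(channel="can0", bustype="socketcan")

# Send one frame with a standard (11-bit) identifier and a placeholder payload.
msg = can.Message(arbitration_id=0x123, data=[0x01, 0x02, 0x03], is_extended_id=False)
bus.send(msg)

# Wait up to one second for any frame on the bus.
reply = bus.recv(timeout=1.0)
if reply is not None:
    print(hex(reply.arbitration_id), reply.data)

bus.shutdown()
</syntaxhighlight>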
=== Pi4 ===
Waveshare offers an easy-to-deploy 2-Channel Isolated CAN Bus Expansion HAT for the Raspberry Pi, which makes it quick to connect CAN peripherals. See the [https://www.waveshare.com/wiki/2-CH_CAN_HAT tutorial] for more information.
=== Arduino ===
Arduino has good support for the MCP2515, with several implementations of the [https://github.com/Seeed-Studio/Seeed_Arduino_CAN drivers].
=== MCP2515 Driver ===
By default, a CAN bus node is expected to acknowledge every message on the bus, whether or not that node is interested in the message. However, interference on the network can drop bits during communication. In the standard mode, the node would not only continuously try to re-send unacknowledged messages, but after a short period it would also start sending error frames and eventually go into bus-off mode and stop. This causes severe issues when the CAN network drives multiple motors.
The controller has a [http://ww1.microchip.com/downloads/en/DeviceDoc/MCP2515-Stand-Alone-CAN-Controller-with-SPI-20001801J.pdf one-shot] mode, in which a frame is transmitted only once and not retried; enabling it requires changes in the driver.
=== Wiring ===
Here is some suggested equipment for wiring a CAN bus:
* Molex Mini-Fit Junior connectors
** [https://www.digikey.ca/en/products/detail/molex/0638190901/9655931 Crimper]
** [https://www.digikey.ca/en/products/detail/molex/0766500013/2115996 Connector kit]
** [https://www.aliexpress.us/item/3256805730106963.html Extraction tool]
* CAN bus cable
** [https://www.digikey.ca/en/products/detail/igus/CF211-02-01-02/18724291 Cable]
** [https://www.digikey.com/en/products/detail/igus/CF891-07-02/21280679 Alternative Cable]
* Heat shrink
** [https://www.amazon.com/Eventronic-Heat-Shrink-Tubing-Kit-3/dp/B0BVVMCY86 Tubing]
== References ==
<references />
[[Category:Communication]]
ffa771efe644014d3a1453d4524b54a96e99a471
File:Example folder.png
6
288
1306
2024-05-27T22:45:36Z
Kewang
11
wikitext
text/x-wiki
Shows the hierarchy of the folders.
7e2b60613f419828d6d298d4c41475b8bc1f9e70
File:Add model1.png
6
289
1307
2024-05-27T22:58:26Z
Kewang
11
wikitext
text/x-wiki
model1
3bd6e748431d451c4dd72b968cf047aa75f6b493
File:Add model2.png
6
290
1308
2024-05-27T23:01:13Z
Kewang
11
wikitext
text/x-wiki
model2
a6a4646aa4d1de42279ca51938186ee134bb4403
File:Add model3.png
6
291
1309
2024-05-27T23:09:28Z
Kewang
11
wikitext
text/x-wiki
add_model3
059d8879a7db05c1d002aeb9a420d2f0f4855ea6
MuJoCo WASM
0
257
1310
1217
2024-05-27T23:09:58Z
Kewang
11
wikitext
text/x-wiki
== Install emscripten ==
First, you need to install emscripten, which is a compiler toolchain for WebAssembly.
=== Get the emsdk repo ===
<code>
git clone https://github.com/emscripten-core/emsdk.git
</code>
=== Enter that directory ===
<code>
cd emsdk
</code>
=== Download and install the latest SDK tools ===
<code>
./emsdk install latest
</code>
=== Make the "latest" SDK "active" ===
<code>
./emsdk activate latest
</code>
=== Activate PATH and other environment variables ===
<code>
source ./emsdk_env.sh
</code>
These variables are set for the current terminal only. If you want them available in all terminals, add them to your shell profile. The environment variables are:
<code>
EMSDK = < path to emsdk dir >
EM_CONFIG = ~/.emscripten
EMSDK_NODE = < path to emsdk dir >/node/12.9.1_64bit/bin/node
</code>
=== Now just try it! ===
<code>
emcc
</code>
== Build the mujoco_wasm Binary ==
First, clone the repository:
<code>git clone https://github.com/zalo/mujoco_wasm</code>
Next, you'll build the MuJoCo WebAssembly binary.
<syntaxhighlight lang="bash">
mkdir build
cd build
emcmake cmake ..
make
</syntaxhighlight>
[[File:Carbon (1).png|800px|thumb|none|emcmake cmake ..]]
[[File:Carbon (2).png|400px|thumb|none|make]]
'''Tip:''' If you get an error with "undefined symbol: saveSetjmp/testSetjmp" at the build step, revert to:
<code>
./emsdk install 3.1.56 && ./emsdk activate 3.1.56 && source ./emsdk_env.sh
</code>
== Running in Browser ==
Run this in your mujoco_wasm folder to start a local web server.
<code>
python -m http.server 8000
</code>
Then navigate to:
<code>
http://localhost:8000/index.html
</code>
[[File:Wasm screenshot13-40-40.png|800px|thumb|none|MuJoCo running in browser]]
== Running in Cloud/Cluster and Viewing on Local Machine ==
Add extra parameter to your ssh command:
<code>
ssh -L 8000:127.0.0.1:8000 my_name@my_cluster_ip
</code>
Then you can open it on the browser on your local machine!
== Adding New Models ==
All the models are stored in the folder examples/scenes, as seen below:
[[File:Example folder.png|400px|thumb|none]]
You can add your own model XML and meshes here. For example, here we add the stompy folder.
After adding the model files, run
<code>
python generate_index.py
</code>
to update file indexes.
Then copy all the content in <code>index.json</code> to <code> mujocoUtils.js</code>, as shown below:
[[File:Add model1.png|400px|thumb|none]]
Finally, in <code>mujocoUtils.js</code>, add the model name and scene file:
[[File:Add model2.png|600px|thumb|none]]
Then reload the page, and you can see that the new models have been added:
[[File:Add model3.png|800px|thumb|none]]
1bf74441f94674f9fcb1230ebbc621707daf865c
K-Scale CANdaddy
0
235
1311
1304
2024-05-28T01:16:11Z
Vedant
24
wikitext
text/x-wiki
=== MCU ===
* STM32
** Programming mode button (boot / reset)
=== USB ===
* TODO: document the USB interface
=== Buttons ===
* Button debouncer
** TODO: Add the IC name
=== LCD ===
== PCF2518 ==
* How do we connect I2C chip to the LCD screen?
=== Battery Charger ===
=== MCP 2515 ===
=== CAN ===
* How to get supply-side VCC without having it on the bus (from supply-side GND)?
* What do we do if supply-side GND is missing?
=== Voltage ===
* Load sharing between USB and battery
=== Questions ===
# How does the switching regulator mechanism work?
# Do digital logic levels depend on source voltage? For example, let's say that a 3.3V powered device is sending signals to a 5V power device. From my understanding of the datasheet, a direct connection between the signals should not be allowed. However, why is this? What are the internal mechanisms of the pins that make it so that the logic levels need to be based off of the source voltage?
# I understand that there is a way to both use the battery as a source as well as to charge it using the USB. However, wouldn't this mean not being able to put a diode within the circuit directing current in one direction? Isn't this a hazard, destroying the circuit the moment a bit of noise comes through?
# Power Ground in the context of a switching regulator for our circuit to amplify 3.7 V battery to 5 V source. Do we just connect Power Ground to Battery Ground?
=== Notes ===
* Need to figure out the interface for the SPI lines using MCP2515
** Pros if successful:
*** Reduces amount of wiring for STM32, requires just one SPI bus with a bunch of easily programmable chip selects
** Cons: Need to understand the interface for the MCP2515. Need to confirm how interrupts and timing works.
090729d60af80d647261efe8c0e1e191647705d4
1312
1311
2024-05-28T01:17:04Z
Vedant
24
/* PCF2518 */
wikitext
text/x-wiki
=== MCU ===
* STM32
** Programming mode button (boot / reset)
=== USB ===
* TODO: document the USB interface
=== Buttons ===
* Button debouncer
** TODO: Add the IC name
=== LCD ===
=== PCF2518 ===
* How do we connect I2C chip to the LCD screen?
=== Battery Charger ===
=== MCP 2515 ===
=== CAN ===
* How to get supply-side VCC without having it on the bus (from supply-side GND)?
* What do we do if supply-side GND is missing?
=== Voltage ===
* Load sharing between USB and battery
=== Questions ===
# How does the switching regulator mechanism work?
# Do digital logic levels depend on source voltage? For example, let's say that a 3.3V powered device is sending signals to a 5V power device. From my understanding of the datasheet, a direct connection between the signals should not be allowed. However, why is this? What are the internal mechanisms of the pins that make it so that the logic levels need to be based off of the source voltage?
# I understand that there is a way to both use the battery as a source as well as to charge it using the USB. However, wouldn't this mean not being able to put a diode within the circuit directing current in one direction? Isn't this a hazard, destroying the circuit the moment a bit of noise comes through?
# Power Ground in the context of a switching regulator for our circuit to amplify 3.7 V battery to 5 V source. Do we just connect Power Ground to Battery Ground?
=== Notes ===
* Need to figure out the interface for the SPI lines using MCP2515
** Pros if successful:
*** Reduces amount of wiring for STM32, requires just one SPI bus with a bunch of easily programmable chip selects
** Cons: Need to understand the interface for the MCP2515. Need to confirm how interrupts and timing works.
da951a035c608beab5223c6623408aeb88b09ead
1313
1312
2024-05-28T01:24:25Z
Vedant
24
wikitext
text/x-wiki
=== MCU ===
* STM32
** Programming mode button (boot / reset)
=== USB ===
* TODO: document the USB interface
=== Buttons ===
* Button debouncer
** TODO: Add the IC name
=== LCD ===
=== PCF2518 ===
* How do we connect I2C chip to the LCD screen?
=== Battery Charger ===
=== MCP 2515 ===
=== Github access ===
=== CAN ===
* How to get supply-side VCC without having it on the bus (from supply-side GND)?
* What do we do if supply-side GND is missing?
=== Voltage ===
* Load sharing between USB and battery
=== Questions ===
# How does the switching regulator mechanism work?
# Do digital logic levels depend on source voltage? For example, let's say that a 3.3V powered device is sending signals to a 5V power device. From my understanding of the datasheet, a direct connection between the signals should not be allowed. However, why is this? What are the internal mechanisms of the pins that make it so that the logic levels need to be based off of the source voltage?
# I understand that there is a way to both use the battery as a source as well as to charge it using the USB. However, wouldn't this mean not being able to put a diode within the circuit directing current in one direction? Isn't this a hazard, destroying the circuit the moment a bit of noise comes through?
# Power Ground in the context of a switching regulator for our circuit to amplify 3.7 V battery to 5 V source. Do we just connect Power Ground to Battery Ground?
=== Notes ===
* Need to figure out the interface for the SPI lines using MCP2515
** Pros if successful:
*** Reduces amount of wiring for STM32, requires just one SPI bus with a bunch of easily programmable chip selects
** Cons: Need to understand the interface for the MCP2515. Need to confirm how interrupts and timing works.
60d263fdcf6ed36fd547dca8b9d59c3a9988bbd6
1314
1313
2024-05-28T07:28:51Z
Vedant
24
wikitext
text/x-wiki
=== MCU ===
* STM32
** Programming mode button (boot / reset)
Used:
STM32F407VET6
Justification: 512 kB of flash memory, unit price at $3, powerful enough to run heavy programs
=== USB ===
* TODO: document the USB interface
=== Buttons ===
* Button debouncer
** TODO: Add the IC name
=== LCD ===
=== PCF2518 ===
* How do we connect I2C chip to the LCD screen?
=== Battery Charger ===
=== MCP 2515 ===
=== Github access ===
=== CAN ===
* How to get supply-side VCC without having it on the bus (from supply-side GND)?
* What do we do if supply-side GND is missing?
=== Voltage ===
* Load sharing between USB and battery
=== Questions ===
# How does the switching regulator mechanism work?
# Do digital logic levels depend on source voltage? For example, let's say that a 3.3V powered device is sending signals to a 5V power device. From my understanding of the datasheet, a direct connection between the signals should not be allowed. However, why is this? What are the internal mechanisms of the pins that make it so that the logic levels need to be based off of the source voltage?
# I understand that there is a way to both use the battery as a source as well as to charge it using the USB. However, wouldn't this mean not being able to put a diode within the circuit directing current in one direction? Isn't this a hazard, destroying the circuit the moment a bit of noise comes through?
# Power Ground in the context of a switching regulator for our circuit to amplify 3.7 V battery to 5 V source. Do we just connect Power Ground to Battery Ground?
=== Notes ===
* Need to figure out the interface for the SPI lines using MCP2515
** Pros if successful:
*** Reduces amount of wiring for STM32, requires just one SPI bus with a bunch of easily programmable chip selects
** Cons: Need to understand the interface for the MCP2515. Need to confirm how interrupts and timing works.
9765b636c42af3f8c478643e1674ed0264e79f11
1315
1314
2024-05-28T07:34:24Z
Vedant
24
wikitext
text/x-wiki
= Features =
# USB-C + Battery compatibility
# Able
=== Voltage ===
* Load sharing between USB and battery
= Central Components =
== MCU ==
=== STM32 ===
Used:
STM32F407VET6
Justification: 512 kB of flash memory, unit price at $3, powerful enough to run heavy programs
== Buttons + Debouncer ==
* Button Debouncer
** TODO: Add the IC name
== LCD ==
=== PCF2518 ===
* How do we connect I2C chip to the LCD screen?
== Battery Charger ==
=== MCP 2515 ===
== CAN Communication ==
=== MCP2515 ===
* How to get supply-side VCC without having it on the bus (from supply-side GND)?
* What do we do if supply-side GND is missing?
= Other =
== Questions ==
# How does the switching regulator mechanism work?
# Do digital logic levels depend on source voltage? For example, let's say that a 3.3V powered device is sending signals to a 5V power device. From my understanding of the datasheet, a direct connection between the signals should not be allowed. However, why is this? What are the internal mechanisms of the pins that make it so that the logic levels need to be based off of the source voltage?
# I understand that there is a way to both use the battery as a source as well as to charge it using the USB. However, wouldn't this mean not being able to put a diode within the circuit directing current in one direction? Isn't this a hazard, destroying the circuit the moment a bit of noise comes through?
# Power Ground in the context of a switching regulator for our circuit to amplify 3.7 V battery to 5 V source. Do we just connect Power Ground to Battery Ground?
== Notes ==
* Need to figure out the interface for the SPI lines using MCP2515
** Pros if successful:
*** Reduces amount of wiring for STM32, requires just one SPI bus with a bunch of easily programmable chip selects
** Cons: Need to understand the interface for the MCP2515. Need to confirm how interrupts and timing works.
110bb3a215ceb3f523f356cf9b72fc31f0344a95
1316
1315
2024-05-28T07:34:54Z
Vedant
24
wikitext
text/x-wiki
= Features =
# USB-C + Battery compatibility
# Able
=== Voltage ===
* Load sharing between USB and battery
= Central Components =
== MCU ==
=== STM32 ===
Used:
STM32F407VET6
Justification: 512 kB of flash memory, unit price at $3, powerful enough to run heavy programs
== Buttons + Debouncer ==
* Button Debouncer
** TODO: Add the IC name
== LCD ==
=== PCF2518 ===
* How do we connect I2C chip to the LCD screen?
== Battery Charger ==
=== MCP 2515 ===
== CAN Communication ==
=== MCP2515 ===
* How to get supply-side VCC without having it on the bus (from supply-side GND)?
* What do we do if supply-side GND is missing?
= Other =
== Questions ==
# How does the switching regulator mechanism work?
# Do digital logic levels depend on source voltage? For example, let's say that a 3.3V powered device is sending signals to a 5V power device. From my understanding of the datasheet, a direct connection between the signals should not be allowed. However, why is this? What are the internal mechanisms of the pins that make it so that the logic levels need to be based off of the source voltage?
# I understand that there is a way to both use the battery as a source as well as to charge it using the USB. However, wouldn't this mean not being able to put a diode within the circuit directing current in one direction? Isn't this a hazard, destroying the circuit the moment a bit of noise comes through?
# Power Ground in the context of a switching regulator for our circuit to amplify 3.7 V battery to 5 V source. Do we just connect Power Ground to Battery Ground?
== Notes ==
* Need to figure out the interface for the SPI lines using MCP2515
** Pros if successful:
*** Reduces amount of wiring for STM32, requires just one SPI bus with a bunch of easily programmable chip selects
** Cons: Need to understand the interface for the MCP2515. Need to confirm how interrupts and timing works.
a272a138723ba1dd3057aa25403caedaf4eb526f
1317
1316
2024-05-28T07:35:47Z
Vedant
24
wikitext
text/x-wiki
= Features =
# USB-C + Battery compatibility
# Able
=== Voltage ===
* Load sharing between USB and battery
= Central Components =
== MCU ==
=== STM32 ===
Used:
STM32F407VET6
Justification: 512 kB of flash memory, unit price at $3, powerful enough to run heavy programs
== Buttons + Debouncer ==
* Button Debouncer
** TODO: Add the IC name
== LCD ==
=== PCF2518 ===
* How do we connect I2C chip to the LCD screen?
== Battery Charger ==
=== MCP 2515 ===
== CAN Communication ==
=== MCP2515 ===
* How to get supply-side VCC without having it on the bus (from supply-side GND)?
* What do we do if supply-side GND is missing?
= Design =
== Schematic ==
== Layout ==
== Final Form Factor ==
= Other =
== Questions ==
# How does the switching regulator mechanism work?
# Do digital logic levels depend on source voltage? For example, let's say that a 3.3V powered device is sending signals to a 5V power device. From my understanding of the datasheet, a direct connection between the signals should not be allowed. However, why is this? What are the internal mechanisms of the pins that make it so that the logic levels need to be based off of the source voltage?
# I understand that there is a way to both use the battery as a source as well as to charge it using the USB. However, wouldn't this mean not being able to put a diode within the circuit directing current in one direction? Isn't this a hazard, destroying the circuit the moment a bit of noise comes through?
# Power Ground in the context of a switching regulator for our circuit to amplify 3.7 V battery to 5 V source. Do we just connect Power Ground to Battery Ground?
== Notes ==
* Need to figure out the interface for the SPI lines using MCP2515
** Pros if successful:
*** Reduces amount of wiring for STM32, requires just one SPI bus with a bunch of easily programmable chip selects
** Cons: Need to understand the interface for the MCP2515. Need to confirm how interrupts and timing works.
e58654874723a9af6fa33d8c927be95534602fb2
1318
1317
2024-05-28T07:36:18Z
Vedant
24
/* CAN Communication */
wikitext
text/x-wiki
= Features =
# USB-C + Battery compatibility
# Able
=== Voltage ===
* Load sharing between USB and battery
= Central Components =
== MCU ==
=== STM32 ===
Used:
STM32F407VET6
Justification: 512 kB of flash memory, unit price at $3, powerful enough to run heavy programs
== Buttons + Debouncer ==
* Button Debouncer
** TODO: Add the IC name
== LCD ==
=== PCF2518 ===
* How do we connect I2C chip to the LCD screen?
== Battery Charger ==
=== MCP 2515 ===
== CAN Communication ==
=== MCP2515 ===
* How to get supply-side VCC without having it on the bus (from supply-side GND)?
* What do we do if supply-side GND is missing?
= Design =
== Schematic ==
== Layout ==
== Final Form Factor ==
= Other =
== Questions ==
# How does the switching regulator mechanism work?
# Do digital logic levels depend on source voltage? For example, let's say that a 3.3V powered device is sending signals to a 5V power device. From my understanding of the datasheet, a direct connection between the signals should not be allowed. However, why is this? What are the internal mechanisms of the pins that make it so that the logic levels need to be based off of the source voltage?
# I understand that there is a way to both use the battery as a source as well as to charge it using the USB. However, wouldn't this mean not being able to put a diode within the circuit directing current in one direction? Isn't this a hazard, destroying the circuit the moment a bit of noise comes through?
# Power Ground in the context of a switching regulator for our circuit to amplify 3.7 V battery to 5 V source. Do we just connect Power Ground to Battery Ground?
== Notes ==
* Need to figure out the interface for the SPI lines using MCP2515
** Pros if successful:
*** Reduces amount of wiring for STM32, requires just one SPI bus with a bunch of easily programmable chip selects
** Cons: Need to understand the interface for the MCP2515. Need to confirm how interrupts and timing works.
350c104dc359246468d54f276aa4e0cdc4e49b74
1319
1318
2024-05-28T07:36:29Z
Vedant
24
/* MCP 2515 */
wikitext
text/x-wiki
= Features =
# USB-C + Battery compatibility
# Able
=== Voltage ===
* Load sharing between USB and battery
= Central Components =
== MCU ==
=== STM32 ===
Used:
STM32F407VET6
Justification: 512 kB of flash memory, unit price at $3, powerful enough to run heavy programs
== Buttons + Debouncer ==
* Button Debouncer
** TODO: Add the IC name
== LCD ==
=== PCF2518 ===
* How do we connect I2C chip to the LCD screen?
== Battery Charger ==
=== MCP 2515 ===
== CAN Communication ==
=== MCP2515 ===
* How to get supply-side VCC without having it on the bus (from supply-side GND)?
* What do we do if supply-side GND is missing?
= Design =
== Schematic ==
== Layout ==
== Final Form Factor ==
= Other =
== Questions ==
# How does the switching regulator mechanism work?
# Do digital logic levels depend on the supply voltage? For example, say a 3.3 V-powered device is sending signals to a 5 V-powered device. From my understanding of the datasheets, a direct connection between the signals should not be allowed. Why is this? What are the internal mechanisms of the pins that make the logic levels depend on the supply voltage?
# I understand that there is a way to both use the battery as a source and charge it over USB. However, wouldn't this mean we cannot put a diode in the circuit to force current in one direction? Isn't that a hazard that could destroy the circuit the moment a bit of noise comes through?
# What is Power Ground in the context of the switching regulator that boosts the 3.7 V battery to the 5 V supply? Do we just connect Power Ground to Battery Ground?
== Notes ==
* Need to figure out the interface for the SPI lines using MCP2515
** Pros if successful:
*** Reduces amount of wiring for STM32, requires just one SPI bus with a bunch of easily programmable chip selects
** Cons: Need to understand the interface for the MCP2515. Need to confirm how interrupts and timing work.
e26b1f9d788ebd80598bce318e126bc7133e816b
1320
1319
2024-05-28T07:36:41Z
Vedant
24
/* Battery Charger */
wikitext
text/x-wiki
= Features =
# USB-C + Battery compatibility
# Able
=== Voltage ===
* Load sharing between USB and battery
= Central Components =
== MCU ==
=== STM32 ===
Used:
STM32F407VET6
Justification: 512 kB of flash memory, unit price at $3, powerful enough to run heavy programs
== Buttons + Debouncer ==
* Button Debouncer
** TODO: Add the IC name
== LCD ==
=== PCF2518 ===
* How do we connect I2C chip to the LCD screen?
== Battery Charger ==
== CAN Communication ==
=== MCP2515 ===
* How to get supply-side VCC without having it on the bus (from supply-side GND)?
* What do we do if supply-side GND is missing?
= Design =
== Schematic ==
== Layout ==
== Final Form Factor ==
= Other =
== Questions ==
# How does the switching regulator mechanism work?
# Do digital logic levels depend on the supply voltage? For example, say a 3.3 V-powered device is sending signals to a 5 V-powered device. From my understanding of the datasheets, a direct connection between the signals should not be allowed. Why is this? What are the internal mechanisms of the pins that make the logic levels depend on the supply voltage?
# I understand that there is a way to both use the battery as a source and charge it over USB. However, wouldn't this mean we cannot put a diode in the circuit to force current in one direction? Isn't that a hazard that could destroy the circuit the moment a bit of noise comes through?
# What is Power Ground in the context of the switching regulator that boosts the 3.7 V battery to the 5 V supply? Do we just connect Power Ground to Battery Ground?
== Notes ==
* Need to figure out the interface for the SPI lines using MCP2515
** Pros if successful:
*** Reduces amount of wiring for STM32, requires just one SPI bus with a bunch of easily programmable chip selects
** Cons: Need to understand the interface for the MCP2515. Need to confirm how interrupts and timing work.
1f6293006629245e4e0d9e037f80d5074af1b2f0
1321
1320
2024-05-28T07:38:08Z
Vedant
24
wikitext
text/x-wiki
= Features =
# USB-C + Battery compatibility
# Able
=== Voltage ===
* Load sharing between USB and battery
= Central Components =
== MCU ==
=== STM32 ===
Used: STM32F407VET6
Justification: 512 kB of flash memory, a unit price of about $3, and enough performance headroom to run heavier programs
== Buttons + Debouncer ==
* Button Debouncer
** TODO: Add the IC name
== LCD ==
=== PCF2518 ===
* How do we connect I2C chip to the LCD screen?
== Battery Charger ==
== CAN Communication ==
=== MCP2515 ===
* How to get supply-side VCC without having it on the bus (from supply-side GND)?
* What do we do if supply-side GND is missing?
= Design =
== Schematic ==
== Layout ==
== Final Form Factor ==
= Other =
== Questions ==
# How does the switching regulator mechanism work?
# Do digital logic levels depend on the supply voltage? For example, say a 3.3 V-powered device is sending signals to a 5 V-powered device. From my understanding of the datasheets, a direct connection between the signals should not be allowed. Why is this? What are the internal mechanisms of the pins that make the logic levels depend on the supply voltage?
# I understand that there is a way to both use the battery as a source and charge it over USB. However, wouldn't this mean we cannot put a diode in the circuit to force current in one direction? Isn't that a hazard that could destroy the circuit the moment a bit of noise comes through?
# What is Power Ground in the context of the switching regulator that boosts the 3.7 V battery to the 5 V supply? Do we just connect Power Ground to Battery Ground?
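A starting point for questions 1 and 4: an ideal boost converter stores energy in an inductor while its switch is on and dumps it into the output while the switch is off, giving Vout = Vin / (1 − D) for switch duty cycle D, so boosting the 3.7 V battery to 5 V needs roughly D = 1 − 3.7/5 ≈ 0.26. Power Ground is the return path for those switching currents and is normally tied to Battery Ground at a single point near the regulator; the chosen regulator's datasheet layout guidance should be the reference here.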
== Notes ==
* Need to figure out the SPI interface to the MCP2515 (see the sketch below)
** Pros if successful:
*** Reduces the amount of wiring to the STM32: just one shared SPI bus with an easily programmable chip select per device
** Cons: Need to understand the MCP2515's SPI interface and confirm how its interrupts and timing work.
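Below is a minimal sketch of the shared-bus idea, assuming a CubeMX-generated STM32F4 project in which <code>hspi1</code> is the shared SPI bus and the MCP2515's chip select sits on a hypothetical GPIO (PB12 here). It reads one register using the MCP2515's SPI READ instruction; each additional device on the bus reuses the same pattern with its own chip-select pin.
<syntaxhighlight lang=c>
/* Sketch only: assumes a CubeMX-generated project where hspi1 is already
 * configured and PB12 is wired to the MCP2515's nCS pin (hypothetical choice). */
#include "stm32f4xx_hal.h"

extern SPI_HandleTypeDef hspi1;          /* SPI bus shared by all peripherals */

#define MCP2515_CS_PORT     GPIOB        /* hypothetical chip-select pin */
#define MCP2515_CS_PIN      GPIO_PIN_12

#define MCP2515_CMD_READ    0x03         /* MCP2515 SPI READ instruction */
#define MCP2515_REG_CANSTAT 0x0E         /* e.g. mcp2515_read_register(MCP2515_REG_CANSTAT) */

static uint8_t mcp2515_read_register(uint8_t addr)
{
    uint8_t tx[3] = { MCP2515_CMD_READ, addr, 0x00 };
    uint8_t rx[3] = { 0 };

    /* Select only the MCP2515; every other device on the bus keeps its CS high. */
    HAL_GPIO_WritePin(MCP2515_CS_PORT, MCP2515_CS_PIN, GPIO_PIN_RESET);
    HAL_SPI_TransmitReceive(&hspi1, tx, rx, sizeof(tx), HAL_MAX_DELAY);
    HAL_GPIO_WritePin(MCP2515_CS_PORT, MCP2515_CS_PIN, GPIO_PIN_SET);

    return rx[2];   /* register value is clocked out on the third byte */
}
</syntaxhighlight>
Interrupt handling would hang off the MCP2515's INT output wired to an EXTI-capable STM32 pin; that timing behaviour is what still needs to be confirmed against the datasheet.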
== Github Repository ==
2e90c40af66f70ca693545f09b8242ef93b83aa5
Wireless Modules
0
292
1322
2024-05-28T07:45:02Z
Vedant
24
Created page with "= Justification = Robust, wireless communication between multiple PCB boards decreased design constraints and wiring, allowing for much more modular and adjustable designs ca..."
wikitext
text/x-wiki
= Justification =
Robust wireless communication between multiple PCBs would decrease design constraints and wiring, allowing for much more modular and adjustable designs. This would allow for mobility, ease of communication, reduced clutter, and scalability.
= Current Methods of Integration =
== ESP32 Built-In ==
By connecting
34c7396f7f0dbb10f02fd7543e05c40e59f56b8a
1323
1322
2024-05-28T08:30:55Z
Vedant
24
/* ESP32 Built-In */
wikitext
text/x-wiki
= Justification =
Robust wireless communication between multiple PCBs would decrease design constraints and wiring, allowing for much more modular and adjustable designs. This would allow for mobility, ease of communication, reduced clutter, and scalability.
= Current Methods of Integration =
= ESP32 Built-In =
== Connections ==
=== UART Communication ===
* Hook up
=== I2C Communication ===
=== SPI Communication ===
== Firmware ==
== Examples ==
[https://www.digikey.com/en/products/detail/espressif-systems/ESP32-C3-MINI-1-N4/13877574 ESP32-C3-MINI-1-N4]
* ESP32 Chip adjusted specifically for the sole purpose of wireless communication.
* Interfaced with using UART
*
== Pros and Cons ==
=== Pros ===
=== Cons ===
= Bluetooth Modules =
== Component Selection ==
=== HC_05/HC_06 ===
=== CYM4343W ===
* Contains both Bluetooth as well as Wifi protocols
* Uses UART for Bluetooth
* Uses SDIO for wifi communication
== Pros and Cons ==
=== Pros ===
* Simple implementation with supported Code
=== Cons ===
* Separate PCB Board interfaced with
== TODO ==
Look into the specific components of the Bluetooth modules to find ICs that can be easily used and interfaced with.
af2fbe84fb64a2da7a18e203cd3422a15daa3df4
1324
1323
2024-05-28T08:31:13Z
Vedant
24
/* CYM4343W */
wikitext
text/x-wiki
= Justification =
Robust wireless communication between multiple PCBs would decrease design constraints and wiring, allowing for much more modular and adjustable designs. This would allow for mobility, ease of communication, reduced clutter, and scalability.
= Current Methods of Integration =
= ESP32 Built-In =
== Connections ==
=== UART Communication ===
* Hook up
=== I2C Communication ===
=== SPI Communication ===
== Firmware ==
== Examples ==
[https://www.digikey.com/en/products/detail/espressif-systems/ESP32-C3-MINI-1-N4/13877574 ESP32-C3-MINI-1-N4]
* ESP32 Chip adjusted specifically for the sole purpose of wireless communication.
* Interfaced with using UART
*
== Pros and Cons ==
=== Pros ===
=== Cons ===
= Bluetooth Modules =
== Component Selection ==
=== HC_05/HC_06 ===
=== CYM4343W ===
Contains both Bluetooth as well as Wifi protocols
Uses UART for Bluetooth
Uses SDIO for wifi communication
== Pros and Cons ==
=== Pros ===
* Simple implementation with supported Code
=== Cons ===
* Separate PCB Board interfaced with
== TODO ==
Look into the specific components of the Bluetooth modules to find ICs that can be easily used and interfaced with.
187555a6084684454a7dfee2ec9f00f1439da486
1325
1324
2024-05-28T08:32:33Z
Vedant
24
/* Pros */
wikitext
text/x-wiki
= Justification =
Robust wireless communication between multiple PCBs would decrease design constraints and wiring, allowing for much more modular and adjustable designs. This would allow for mobility, ease of communication, reduced clutter, and scalability.
= Current Methods of Integration =
= ESP32 Built-In =
== Connections ==
=== UART Communication ===
* Hook up
=== I2C Communication ===
=== SPI Communication ===
== Firmware ==
== Examples ==
[https://www.digikey.com/en/products/detail/espressif-systems/ESP32-C3-MINI-1-N4/13877574 ESP32-C3-MINI-1-N4]
* ESP32 Chip adjusted specifically for the sole purpose of wireless communication.
* Interfaced with using UART
*
== Pros and Cons ==
=== Pros ===
Allows for convenient, easy to mount access to wireless communication.
Easy to interface with, using UART in order to communicate data.
=== Cons ===
= Bluetooth Modules =
== Component Selection ==
=== HC_05/HC_06 ===
=== CYM4343W ===
Contains both Bluetooth as well as Wifi protocols
Uses UART for Bluetooth
Uses SDIO for wifi communication
== Pros and Cons ==
=== Pros ===
* Simple implementation with supported Code
=== Cons ===
* Separate PCB Board interfaced with
== TODO ==
Look into the specific components of the Bluetooth modules to find ICs that can be easily used and interfaced with.
6018eefdbd9b30b0377a251623bd7bd794c7861f
1326
1325
2024-05-28T08:32:46Z
Vedant
24
/* Pros and Cons */
wikitext
text/x-wiki
= Justification =
Robust wireless communication between multiple PCBs would decrease design constraints and wiring, allowing for much more modular and adjustable designs. This would allow for mobility, ease of communication, reduced clutter, and scalability.
= Current Methods of Integration =
= ESP32 Built-In =
== Connections ==
=== UART Communication ===
* Hook up
=== I2C Communication ===
=== SPI Communication ===
== Firmware ==
== Examples ==
[https://www.digikey.com/en/products/detail/espressif-systems/ESP32-C3-MINI-1-N4/13877574 ESP32-C3-MINI-1-N4]
* ESP32 Chip adjusted specifically for the sole purpose of wireless communication.
* Interfaced with using UART
*
== Pros and Cons ==
=== Pros ===
Allows for convenient, easy to mount access to wireless communication.
Easy to interface with, using UART in order to communicate data.
=== Cons ===
= Bluetooth Modules =
== Component Selection ==
=== HC_05/HC_06 ===
=== CYM4343W ===
Contains both Bluetooth as well as Wifi protocols
Uses UART for Bluetooth
Uses SDIO for wifi communication
== Pros and Cons ==
=== Pros ===
* Simple implementation with supported Code
=== Cons ===
* Separate PCB Board interfaced with/ Not modular.
== TODO ==
Look into the specific components of the Bluetooth modules to find ICs that can be easily used and interfaced with.
7fcc24671a00df8882f9f446e80c65ed5d4b8545
1327
1326
2024-05-28T08:32:55Z
Vedant
24
/* TODO */
wikitext
text/x-wiki
= Justification =
Robust wireless communication between multiple PCBs would decrease design constraints and wiring, allowing for much more modular and adjustable designs. This would allow for mobility, ease of communication, reduced clutter, and scalability.
= Current Methods of Integration =
= ESP32 Built-In =
== Connections ==
=== UART Communication ===
* Hook up
=== I2C Communication ===
=== SPI Communication ===
== Firmware ==
== Examples ==
[https://www.digikey.com/en/products/detail/espressif-systems/ESP32-C3-MINI-1-N4/13877574 ESP32-C3-MINI-1-N4]
* ESP32 Chip adjusted specifically for the sole purpose of wireless communication.
* Interfaced with using UART
*
== Pros and Cons ==
=== Pros ===
Allows for convenient, easy to mount access to wireless communication.
Easy to interface with, using UART in order to communicate data.
=== Cons ===
= Bluetooth Modules =
== Component Selection ==
=== HC_05/HC_06 ===
=== CYM4343W ===
Contains both Bluetooth as well as Wifi protocols
Uses UART for Bluetooth
Uses SDIO for wifi communication
== Pros and Cons ==
=== Pros ===
* Simple implementation with supported Code
=== Cons ===
* Separate PCB Board interfaced with/ Not modular.
== TODO ==
Look into the specific components of the Bluetooth modules to find ICs that can be easily used and interfaced with.
c4ec65a49c5e401513b57bba0fca23e30f3de9b8
1328
1327
2024-05-28T08:34:54Z
Vedant
24
/* UART Communication */
wikitext
text/x-wiki
= Justification =
Robust wireless communication between multiple PCBs would decrease design constraints and wiring, allowing for much more modular and adjustable designs. This would allow for mobility, ease of communication, reduced clutter, and scalability.
= Current Methods of Integration =
= ESP32 Built-In =
== Connections ==
=== UART Communication ===
* Hook up
=== I2C Communication ===
=== SPI Communication ===
== Firmware ==
== Examples ==
[https://www.digikey.com/en/products/detail/espressif-systems/ESP32-C3-MINI-1-N4/13877574 ESP32-C3-MINI-1-N4]
* ESP32 Chip adjusted specifically for the sole purpose of wireless communication.
* Interfaced with using UART
*
== Pros and Cons ==
=== Pros ===
Allows for convenient, easy to mount access to wireless communication.
Easy to interface with, using UART in order to communicate data.
=== Cons ===
= Bluetooth Modules =
== Component Selection ==
=== HC_05/HC_06 ===
=== CYM4343W ===
Contains both Bluetooth as well as Wifi protocols
Uses UART for Bluetooth
Uses SDIO for wifi communication
== Pros and Cons ==
=== Pros ===
* Simple implementation with supported Code
=== Cons ===
* Separate PCB Board interfaced with/ Not modular.
== TODO ==
Look into the specific components of the Bluetooth modules to find ICs that can be easily used and interfaced with.
0dc652df614e57c69c9c7c4a58288dca10ce6a85
1329
1328
2024-05-28T08:36:23Z
Vedant
24
/* Examples */
wikitext
text/x-wiki
= Justification =
Robust wireless communication between multiple PCBs would decrease design constraints and wiring, allowing for much more modular and adjustable designs. This would allow for mobility, ease of communication, reduced clutter, and scalability.
= Current Methods of Integration =
= ESP32 Built-In =
== Connections ==
=== UART Communication ===
* Hook up
=== I2C Communication ===
=== SPI Communication ===
== Firmware ==
== Examples ==
[https://www.digikey.com/en/products/detail/espressif-systems/ESP32-C3-MINI-1-N4/13877574 ESP32-C3-MINI-1-N4]
* ESP32 Chip adjusted specifically for the sole purpose of wireless communication.
* Interfaced with using UART
*
[https://www.digikey.com/en/products/detail/espressif-systems/ESP32-WROOM-32-N4/8544298 ESP32-WROOM-32]
* Uses UART to both send data to and receive data from other modules
== Pros and Cons ==
=== Pros ===
Allows for convenient, easy to mount access to wireless communication.
Easy to interface with, using UART in order to communicate data.
=== Cons ===
= Bluetooth Modules =
== Component Selection ==
=== HC_05/HC_06 ===
=== CYM4343W ===
Contains both Bluetooth as well as Wifi protocols
Uses UART for Bluetooth
Uses SDIO for wifi communication
== Pros and Cons ==
=== Pros ===
* Simple implementation with supported Code
=== Cons ===
* Separate PCB Board interfaced with/ Not modular.
== TODO ==
Look into the specific components of the Bluetooth modules to find ICs that can be easily used and interfaced with.
0843c162296492f539fb687b120cdb729a403530
1330
1329
2024-05-28T08:52:50Z
Vedant
24
wikitext
text/x-wiki
= Justification =
Robust wireless communication between multiple PCBs would decrease design constraints and wiring, allowing for much more modular and adjustable designs. This would allow for mobility, ease of communication, reduced clutter, and scalability. Could potentially
= Current Methods of Integration =
= ESP32 Built-In =
== Connections ==
=== UART Communication ===
* Hook up the ESP32's UART_TX and UART_RX to the STM32 (TX to RX and RX to TX); see the sketch below.
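A minimal sketch of that hookup from the STM32 side, assuming a CubeMX-generated project where a hypothetical <code>huart2</code> instance is the UART wired to the ESP32 (both parts use 3.3 V logic, so no level shifting is needed). What the ESP32 does with the bytes depends entirely on the firmware flashed onto it (AT command set, custom parser, etc.).
<syntaxhighlight lang=c>
/* Sketch only: huart2 is a hypothetical UART instance cross-wired to the
 * ESP32 (STM32 TX -> ESP32 RX, STM32 RX <- ESP32 TX, grounds shared). */
#include <string.h>
#include "stm32f4xx_hal.h"

extern UART_HandleTypeDef huart2;

/* Send one newline-terminated command string to the ESP32. */
static void esp32_send_line(const char *line)
{
    HAL_UART_Transmit(&huart2, (uint8_t *)line, (uint16_t)strlen(line), HAL_MAX_DELAY);
    HAL_UART_Transmit(&huart2, (uint8_t *)"\r\n", 2, HAL_MAX_DELAY);
}

/* Block until the ESP32 sends len bytes back (polling; real firmware would
 * likely use interrupt- or DMA-based reception instead). */
static HAL_StatusTypeDef esp32_read(uint8_t *buf, uint16_t len, uint32_t timeout_ms)
{
    return HAL_UART_Receive(&huart2, buf, len, timeout_ms);
}
</syntaxhighlight>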
=== I2C Communication ===
=== SPI Communication ===
== Firmware ==
== Examples ==
[https://www.digikey.com/en/products/detail/espressif-systems/ESP32-C3-MINI-1-N4/13877574 ESP32-C3-MINI-1-N4]
* An ESP32-family module aimed specifically at wireless communication.
* Interfaced with over UART
*
[https://www.digikey.com/en/products/detail/espressif-systems/ESP32-WROOM-32-N4/8544298 ESP32-WROOM-32]
* Uses UART to both send data to and receive data from other modules
== Pros and Cons ==
=== Pros ===
* Convenient, easy-to-mount access to wireless communication.
* Easy to interface with, using UART to exchange data.
=== Cons ===
= Bluetooth Modules =
== Component Selection ==
=== HC_05/HC_06 ===
=== CYM4343W ===
* Contains both Bluetooth and Wi-Fi protocols
* Uses UART for Bluetooth
* Uses SDIO for Wi-Fi communication
== Pros and Cons ==
=== Pros ===
* Simple implementation with supported code
=== Cons ===
* Requires interfacing with a separate PCB; not modular.
== TODO ==
Look into the specific components of the Bluetooth modules to find ICs that can be easily used and interfaced with.
Figure out how to interface with the ESP32 firmware from the STM32: how to send specific commands, etc.
= Interfacing with Other Boards =
== Network Setup ==
=== Access Point ===
Set up one device as the master; this is the intended purpose of the CANdaddy. It hosts the network and allows other devices to connect using an SSID and password. In the case of the ESP32, this is done through built-in libraries that allow for
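A minimal sketch of that bring-up, assuming the ESP-IDF soft-AP API is the built-in library in question (the Arduino core's WiFi.softAP() wraps the same Wi-Fi driver); the SSID and password below are placeholders.
<syntaxhighlight lang=c>
/* Sketch only: ESP-IDF soft-AP bring-up with placeholder credentials. */
#include <string.h>
#include "esp_event.h"
#include "esp_netif.h"
#include "esp_wifi.h"
#include "nvs_flash.h"

void app_main(void)
{
    /* The Wi-Fi driver stores calibration data in NVS, so initialise that first. */
    ESP_ERROR_CHECK(nvs_flash_init());
    ESP_ERROR_CHECK(esp_netif_init());
    ESP_ERROR_CHECK(esp_event_loop_create_default());
    esp_netif_create_default_wifi_ap();

    wifi_init_config_t cfg = WIFI_INIT_CONFIG_DEFAULT();
    ESP_ERROR_CHECK(esp_wifi_init(&cfg));

    wifi_config_t ap = {
        .ap = {
            .ssid = "candaddy-ap",            /* placeholder SSID */
            .ssid_len = strlen("candaddy-ap"),
            .password = "change-me-1234",     /* placeholder password */
            .max_connection = 4,
            .authmode = WIFI_AUTH_WPA2_PSK,
        },
    };
    ESP_ERROR_CHECK(esp_wifi_set_mode(WIFI_MODE_AP));
    ESP_ERROR_CHECK(esp_wifi_set_config(WIFI_IF_AP, &ap));
    ESP_ERROR_CHECK(esp_wifi_start());
}
</syntaxhighlight>
Once the access point is up, the other boards connect as Wi-Fi stations and the actual data exchange happens over whatever transport is chosen in the sections below (for example TCP/UDP sockets or ESP-NOW).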
== Communication Protocol ==
== Data Exchange ==
dfcf9693ffb8df5206e24919c5aca4e5bad9a8ff
Dennis' Speech Project
0
293
1331
2024-05-28T17:21:27Z
Ben
2
Created page with "=== Papers === * [https://distill.pub/2017/ctc/ CTC] * [https://arxiv.org/abs/1810.04805 BERT] * [https://arxiv.org/abs/2006.11477 wav2vec 2.0] * [https://arxiv.org/abs/2106...."
wikitext
text/x-wiki
=== Papers ===
* [https://distill.pub/2017/ctc/ CTC]
* [https://arxiv.org/abs/1810.04805 BERT]
* [https://arxiv.org/abs/2006.11477 wav2vec 2.0]
* [https://arxiv.org/abs/2106.07447 HuBERT]
* [https://speechbot.github.io/ Textless NLP project]
* [https://arxiv.org/abs/2210.13438 Encodec]
* [https://arxiv.org/abs/2308.16692 SpeechTokenizer]
* [https://github.com/suno-ai/bark Suno Bark Model]
6db6779ee74eccd1c8ec7c5b8e1848ec99c9f73c
Robot Descriptions List
0
281
1332
1230
2024-05-28T19:41:00Z
Vrtnis
21
/*Add ANYmal, HRP-2*/
wikitext
text/x-wiki
=== Humanoids ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! URL !! License !! Meshes !! Inertias !! Collisions
|-
| Stompy || K-Scale Labs || URDF || [https://stompy.kscale.dev URDF], [https://stompy.kscale.dev MJCF] || MIT || ✔️ || ✔️ || ✔️
|-
| Digit || Agility Robotics || URDF || [https://github.com/adubredu/DigitRobot.jl URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| REEM-C || PAL Robotics || URDF || [https://github.com/pal-robotics/reemc_robot URDF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| TIAGo || PAL Robotics || URDF || [https://github.com/pal-robotics/tiago_robot URDF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| ANYmal || ANYbotics || URDF, SDF || [https://github.com/ANYbotics/anymal_b_simple_description URDF], [https://github.com/ANYbotics/anymal_c_simple_description SDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| HRP-2 || Kawada Robotics || URDF || [https://github.com/start-jsk/rtmros_common/tree/master/hrp2_models URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| HRP-4 || Kawada Robotics || URDF || [https://github.com/start-jsk/rtmros_common/tree/master/hrp4_models URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| H1 || UNITREE Robotics || MJCF || [https://github.com/google-deepmind/mujoco_menagerie/tree/main/unitree_h1 MJCF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Atlas v4 || Boston Dynamics || URDF || [https://github.com/openai/roboschool/tree/1.0.49/roboschool/models_robot/atlas_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| Valkyrie || NASA JSC Robotics || URDF, Xacro || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/val_description/model URDF], [https://gitlab.com/nasa-jsc-robotics/val_description Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| JVRC-1 || AIST || MJCF, URDF || [https://github.com/isri-aist/jvrc_mj_description/ MJCF], [https://github.com/stephane-caron/jvrc_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| iCub || IIT || URDF || [https://github.com/robotology/icub-models/tree/master/iCub URDF] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| JAXON || JSK || COLLADA, URDF, VRML || [https://github.com/stephane-caron/openrave_models/tree/master/JAXON COLLADA], [https://github.com/robot-descriptions/jaxon_description URDF], [https://github.com/start-jsk/rtmros_choreonoid/tree/master/jvrc_models/JAXON_JVRC VRML] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| Atlas DRC (v3) || Boston Dynamics || URDF || [https://github.com/RobotLocomotion/models/blob/master/atlas/atlas_convex_hull.urdf URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Gundam RX-78 || Bandai Namco Filmworks || URDF || [https://github.com/gundam-global-challenge/gundam_robot/tree/master/gundam_rx78_description URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| Romeo || Aldebaran Robotics || URDF || [https://github.com/ros-aldebaran/romeo_robot/tree/master/romeo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| SigmaBan || Rhoban || URDF || [https://github.com/Rhoban/sigmaban_urdf URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| Robonaut 2 || NASA JSC Robotics || URDF || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/r2_description URDF] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| TALOS || PAL Robotics || URDF, Xacro || [https://github.com/stack-of-tasks/talos-data URDF], [https://github.com/pal-robotics/talos_robot/tree/kinetic-devel/talos_description Xacro] || LGPL-3.0, Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| WALK-MAN || IIT || Xacro || [https://github.com/ADVRHumanoids/iit-walkman-ros-pkg/tree/master/walkman_urdf Xacro] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Draco3 || Apptronik || URDF || [https://github.com/shbang91/draco3_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| ergoCub || IIT || URDF || [https://github.com/icub-tech-iit/ergocub-software/tree/master/urdf/ergoCub URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
=== End Effectors ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Allegro Hand || Wonik Robotics || URDF, MJCF || [https://github.com/RobotLocomotion/models/tree/master/allegro_hand_description/urdf URDF], [https://github.com/google-deepmind/mujoco_menagerie/tree/main/wonik_allegro MJCF] || BSD || ✔️ || ✔️ || ✔️
|-
| Shadow Hand E3M5 || The Shadow Robot Company || MJCF || [https://github.com/deepmind/mujoco_menagerie/tree/main/shadow_hand MJCF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Robotiq 2F-85 || Robotiq || MJCF, URDF, Xacro || [https://github.com/deepmind/mujoco_menagerie/tree/main/robotiq_2f85 MJCF], [https://github.com/a-price/robotiq_arg85_description URDF], [https://github.com/ros-industrial/robotiq/tree/kinetic-devel/robotiq_2f_85_gripper_visualization Xacro] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| BarrettHand || Barrett Technology || URDF || [https://github.com/jhu-lcsr-attic/bhand_model/tree/master/robots URDF] || BSD || ✖️ || ✔️ || ✔️
|-
| WSG 50 || SCHUNK || SDF || [https://github.com/RobotLocomotion/models/tree/master/wsg_50_description SDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
=== Educational ===
{| class="wikitable"
|-
! Name !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Double Pendulum || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/double_pendulum_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Simple Humanoid || URDF || [https://github.com/laas/simple_humanoid_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✖️
|-
| TriFingerEdu || URDF || [https://github.com/facebookresearch/differentiable-robot-model/tree/main/diff_robot_data/trifinger_edu_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| FingerEdu || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/finger_edu_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
5e39cff184d6b5c0430b0f5095ce4898d35717cb
1333
1332
2024-05-28T19:41:20Z
Vrtnis
21
wikitext
text/x-wiki
=== Humanoids ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! URL !! License !! Meshes !! Inertias !! Collisions
|-
| Stompy || K-Scale Labs || URDF || [https://stompy.kscale.dev URDF], [https://stompy.kscale.dev MJCF] || MIT || ✔️ || ✔️ || ✔️
|-
| Digit || Agility Robotics || URDF || [https://github.com/adubredu/DigitRobot.jl URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| REEM-C || PAL Robotics || URDF || [https://github.com/pal-robotics/reemc_robot URDF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| TIAGo || PAL Robotics || URDF || [https://github.com/pal-robotics/tiago_robot URDF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| ANYmal || ANYbotics || URDF, SDF || [https://github.com/ANYbotics/anymal_b_simple_description URDF], [https://github.com/ANYbotics/anymal_c_simple_description SDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| HRP-2 || Kawada Robotics || URDF || [https://github.com/start-jsk/rtmros_common/tree/master/hrp2_models URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| HRP-4 || Kawada Robotics || URDF || [https://github.com/start-jsk/rtmros_common/tree/master/hrp4_models URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| H1 || UNITREE Robotics || MJCF || [https://github.com/google-deepmind/mujoco_menagerie/tree/main/unitree_h1 MJCF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Atlas v4 || Boston Dynamics || URDF || [https://github.com/openai/roboschool/tree/1.0.49/roboschool/models_robot/atlas_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| Valkyrie || NASA JSC Robotics || URDF, Xacro || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/val_description/model URDF], [https://gitlab.com/nasa-jsc-robotics/val_description Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| JVRC-1 || AIST || MJCF, URDF || [https://github.com/isri-aist/jvrc_mj_description/ MJCF], [https://github.com/stephane-caron/jvrc_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| iCub || IIT || URDF || [https://github.com/robotology/icub-models/tree/master/iCub URDF] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| JAXON || JSK || COLLADA, URDF, VRML || [https://github.com/stephane-caron/openrave_models/tree/master/JAXON COLLADA], [https://github.com/robot-descriptions/jaxon_description URDF], [https://github.com/start-jsk/rtmros_choreonoid/tree/master/jvrc_models/JAXON_JVRC VRML] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| Atlas DRC (v3) || Boston Dynamics || URDF || [https://github.com/RobotLocomotion/models/blob/master/atlas/atlas_convex_hull.urdf URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Gundam RX-78 || Bandai Namco Filmworks || URDF || [https://github.com/gundam-global-challenge/gundam_robot/tree/master/gundam_rx78_description URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| Romeo || Aldebaran Robotics || URDF || [https://github.com/ros-aldebaran/romeo_robot/tree/master/romeo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| SigmaBan || Rhoban || URDF || [https://github.com/Rhoban/sigmaban_urdf URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| Robonaut 2 || NASA JSC Robotics || URDF || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/r2_description URDF] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| TALOS || PAL Robotics || URDF, Xacro || [https://github.com/stack-of-tasks/talos-data URDF], [https://github.com/pal-robotics/talos_robot/tree/kinetic-devel/talos_description Xacro] || LGPL-3.0, Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| WALK-MAN || IIT || Xacro || [https://github.com/ADVRHumanoids/iit-walkman-ros-pkg/tree/master/walkman_urdf Xacro] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Draco3 || Apptronik || URDF || [https://github.com/shbang91/draco3_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| ergoCub || IIT || URDF || [https://github.com/icub-tech-iit/ergocub-software/tree/master/urdf/ergoCub URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
=== End Effectors ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Allegro Hand || Wonik Robotics || URDF, MJCF || [https://github.com/RobotLocomotion/models/tree/master/allegro_hand_description/urdf URDF], [https://github.com/google-deepmind/mujoco_menagerie/tree/main/wonik_allegro MJCF] || BSD || ✔️ || ✔️ || ✔️
|-
| Shadow Hand E3M5 || The Shadow Robot Company || MJCF || [https://github.com/deepmind/mujoco_menagerie/tree/main/shadow_hand MJCF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Robotiq 2F-85 || Robotiq || MJCF, URDF, Xacro || [https://github.com/deepmind/mujoco_menagerie/tree/main/robotiq_2f85 MJCF], [https://github.com/a-price/robotiq_arg85_description URDF], [https://github.com/ros-industrial/robotiq/tree/kinetic-devel/robotiq_2f_85_gripper_visualization Xacro] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| BarrettHand || Barrett Technology || URDF || [https://github.com/jhu-lcsr-attic/bhand_model/tree/master/robots URDF] || BSD || ✖️ || ✔️ || ✔️
|-
| WSG 50 || SCHUNK || SDF || [https://github.com/RobotLocomotion/models/tree/master/wsg_50_description SDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
=== Educational ===
{| class="wikitable"
|-
! Name !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Double Pendulum || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/double_pendulum_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Simple Humanoid || URDF || [https://github.com/laas/simple_humanoid_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✖️
|-
| TriFingerEdu || URDF || [https://github.com/facebookresearch/differentiable-robot-model/tree/main/diff_robot_data/trifinger_edu_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| FingerEdu || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/finger_edu_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
c9811ef97fc6bbfd00ce0c139ce5df3c02e00335
1334
1333
2024-05-28T19:46:12Z
Vrtnis
21
/*Add new robots and references*/
wikitext
text/x-wiki
=== Humanoids ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! URL !! License !! Meshes !! Inertias !! Collisions
|-
| Stompy || K-Scale Labs || URDF || [https://stompy.kscale.dev URDF], [https://stompy.kscale.dev MJCF] || MIT || ✔️ || ✔️ || ✔️
|-
| Digit || Agility Robotics || URDF || [https://github.com/adubredu/DigitRobot.jl URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| REEM-C || PAL Robotics || URDF || [https://github.com/pal-robotics/reemc_robot URDF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| TIAGo || PAL Robotics || URDF || [https://github.com/pal-robotics/tiago_robot URDF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| ANYmal || ANYbotics || URDF, SDF || [https://github.com/ANYbotics/anymal_b_simple_description URDF], [https://github.com/ANYbotics/anymal_c_simple_description SDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| HRP-2 || Kawada Robotics || URDF || [https://github.com/start-jsk/rtmros_common/tree/master/hrp2_models URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| HRP-4 || Kawada Robotics || URDF || [https://github.com/start-jsk/rtmros_common/tree/master/hrp4_models URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| H1 || UNITREE Robotics || MJCF || [https://github.com/google-deepmind/mujoco_menagerie/tree/main/unitree_h1 MJCF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Atlas v4 || Boston Dynamics || URDF || [https://github.com/openai/roboschool/tree/1.0.49/roboschool/models_robot/atlas_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| Valkyrie || NASA JSC Robotics || URDF, Xacro || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/val_description/model URDF], [https://gitlab.com/nasa-jsc-robotics/val_description Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| JVRC-1 || AIST || MJCF, URDF || [https://github.com/isri-aist/jvrc_mj_description/ MJCF], [https://github.com/stephane-caron/jvrc_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| iCub || IIT || URDF || [https://github.com/robotology/icub-models/tree/master/iCub URDF] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| JAXON || JSK || COLLADA, URDF, VRML || [https://github.com/stephane-caron/openrave_models/tree/master/JAXON COLLADA], [https://github.com/robot-descriptions/jaxon_description URDF], [https://github.com/start-jsk/rtmros_choreonoid/tree/master/jvrc_models/JAXON_JVRC VRML] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| Atlas DRC (v3) || Boston Dynamics || URDF || [https://github.com/RobotLocomotion/models/blob/master/atlas/atlas_convex_hull.urdf URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Gundam RX-78 || Bandai Namco Filmworks || URDF || [https://github.com/gundam-global-challenge/gundam_robot/tree/master/gundam_rx78_description URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| Romeo || Aldebaran Robotics || URDF || [https://github.com/ros-aldebaran/romeo_robot/tree/master/romeo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| SigmaBan || Rhoban || URDF || [https://github.com/Rhoban/sigmaban_urdf URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| Robonaut 2 || NASA JSC Robotics || URDF || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/r2_description URDF] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| TALOS || PAL Robotics || URDF, Xacro || [https://github.com/stack-of-tasks/talos-data URDF], [https://github.com/pal-robotics/talos_robot/tree/kinetic-devel/talos_description Xacro] || LGPL-3.0, Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| WALK-MAN || IIT || Xacro || [https://github.com/ADVRHumanoids/iit-walkman-ros-pkg/tree/master/walkman_urdf Xacro] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Draco3 || Apptronik || URDF || [https://github.com/shbang91/draco3_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| ergoCub || IIT || URDF || [https://github.com/icub-tech-iit/ergocub-software/tree/master/urdf/ergoCub URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
=== End Effectors ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Allegro Hand || Wonik Robotics || URDF, MJCF || [https://github.com/RobotLocomotion/models/tree/master/allegro_hand_description/urdf URDF], [https://github.com/google-deepmind/mujoco_menagerie/tree/main/wonik_allegro MJCF] || BSD || ✔️ || ✔️ || ✔️
|-
| Shadow Hand E3M5 || The Shadow Robot Company || MJCF || [https://github.com/deepmind/mujoco_menagerie/tree/main/shadow_hand MJCF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Robotiq 2F-85 || Robotiq || MJCF, URDF, Xacro || [https://github.com/deepmind/mujoco_menagerie/tree/main/robotiq_2f85 MJCF], [https://github.com/a-price/robotiq_arg85_description URDF], [https://github.com/ros-industrial/robotiq/tree/kinetic-devel/robotiq_2f_85_gripper_visualization Xacro] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| BarrettHand || Barrett Technology || URDF || [https://github.com/jhu-lcsr-attic/bhand_model/tree/master/robots URDF] || BSD || ✖️ || ✔️ || ✔️
|-
| WSG 50 || SCHUNK || SDF || [https://github.com/RobotLocomotion/models/tree/master/wsg_50_description SDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
=== Educational ===
{| class="wikitable"
|-
! Name !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Double Pendulum || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/double_pendulum_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Simple Humanoid || URDF || [https://github.com/laas/simple_humanoid_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✖️
|-
| TriFingerEdu || URDF || [https://github.com/facebookresearch/differentiable-robot-model/tree/main/diff_robot_data/trifinger_edu_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| FingerEdu || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/finger_edu_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
== References ==
* https://github.com/robot-descriptions/awesome-robot-descriptions
* https://github.com/robot-descriptions/robot_descriptions.py
* https://github.com/robotology
0e16ea5381eaeb2fa8f7d85ad19dbde99b3d7bfe
1335
1334
2024-05-28T19:54:30Z
Vrtnis
21
/*Add Rethink Baxter*/
wikitext
text/x-wiki
=== Humanoids ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! URL !! License !! Meshes !! Inertias !! Collisions
|-
| Stompy || K-Scale Labs || URDF || [https://stompy.kscale.dev URDF], [https://stompy.kscale.dev MJCF] || MIT || ✔️ || ✔️ || ✔️
|-
| Digit || Agility Robotics || URDF || [https://github.com/adubredu/DigitRobot.jl URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| H1 || UNITREE Robotics || MJCF || [https://github.com/google-deepmind/mujoco_menagerie/tree/main/unitree_h1 MJCF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Atlas v4 || Boston Dynamics || URDF || [https://github.com/openai/roboschool/tree/1.0.49/roboschool/models_robot/atlas_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| Valkyrie || NASA JSC Robotics || URDF, Xacro || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/val_description/model URDF], [https://gitlab.com/nasa-jsc-robotics/val_description Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| JVRC-1 || AIST || MJCF, URDF || [https://github.com/isri-aist/jvrc_mj_description/ MJCF], [https://github.com/stephane-caron/jvrc_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| iCub || IIT || URDF || [https://github.com/robotology/icub-models/tree/master/iCub URDF] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| JAXON || JSK || COLLADA, URDF, VRML || [https://github.com/stephane-caron/openrave_models/tree/master/JAXON COLLADA], [https://github.com/robot-descriptions/jaxon_description URDF], [https://github.com/start-jsk/rtmros_choreonoid/tree/master/jvrc_models/JAXON_JVRC VRML] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| Atlas DRC (v3) || Boston Dynamics || URDF || [https://github.com/RobotLocomotion/models/blob/master/atlas/atlas_convex_hull.urdf URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Gundam RX-78 || Bandai Namco Filmworks || URDF || [https://github.com/gundam-global-challenge/gundam_robot/tree/master/gundam_rx78_description URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| Romeo || Aldebaran Robotics || URDF || [https://github.com/ros-aldebaran/romeo_robot/tree/master/romeo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| SigmaBan || Rhoban || URDF || [https://github.com/Rhoban/sigmaban_urdf URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| Robonaut 2 || NASA JSC Robotics || URDF || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/r2_description URDF] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| TALOS || PAL Robotics || URDF, Xacro || [https://github.com/stack-of-tasks/talos-data URDF], [https://github.com/pal-robotics/talos_robot/tree/kinetic-devel/talos_description Xacro] || LGPL-3.0, Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| WALK-MAN || IIT || Xacro || [https://github.com/ADVRHumanoids/iit-walkman-ros-pkg/tree/master/walkman_urdf Xacro] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Draco3 || Apptronik || URDF || [https://github.com/shbang91/draco3_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| ergoCub || IIT || URDF || [https://github.com/icub-tech-iit/ergocub-software/tree/master/urdf/ergoCub URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Baxter || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/baxter.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|}
=== End Effectors ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Allegro Hand || Wonik Robotics || URDF, MJCF || [https://github.com/RobotLocomotion/models/tree/master/allegro_hand_description/urdf URDF], [https://github.com/google-deepmind/mujoco_menagerie/tree/main/wonik_allegro MJCF] || BSD || ✔️ || ✔️ || ✔️
|-
| Shadow Hand E3M5 || The Shadow Robot Company || MJCF || [https://github.com/deepmind/mujoco_menagerie/tree/main/shadow_hand MJCF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Robotiq 2F-85 || Robotiq || MJCF, URDF, Xacro || [https://github.com/deepmind/mujoco_menagerie/tree/main/robotiq_2f85 MJCF], [https://github.com/a-price/robotiq_arg85_description URDF], [https://github.com/ros-industrial/robotiq/tree/kinetic-devel/robotiq_2f_85_gripper_visualization Xacro] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| BarrettHand || Barrett Technology || URDF || [https://github.com/jhu-lcsr-attic/bhand_model/tree/master/robots URDF] || BSD || ✖️ || ✔️ || ✔️
|-
| WSG 50 || SCHUNK || SDF || [https://github.com/RobotLocomotion/models/tree/master/wsg_50_description SDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
=== Educational ===
{| class="wikitable"
|-
! Name !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Double Pendulum || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/double_pendulum_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Simple Humanoid || URDF || [https://github.com/laas/simple_humanoid_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✖️
|-
| TriFingerEdu || URDF || [https://github.com/facebookresearch/differentiable-robot-model/tree/main/diff_robot_data/trifinger_edu_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| FingerEdu || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/finger_edu_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
f354b8f98c5441b8f55769fe99fec01f17cbb3e2
1336
1335
2024-05-28T19:57:51Z
Vrtnis
21
/*Add Baxter End Effector*/
wikitext
text/x-wiki
=== Humanoids ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! URL !! License !! Meshes !! Inertias !! Collisions
|-
| Stompy || K-Scale Labs || URDF || [https://stompy.kscale.dev URDF], [https://stompy.kscale.dev MJCF] || MIT || ✔️ || ✔️ || ✔️
|-
| Digit || Agility Robotics || URDF || [https://github.com/adubredu/DigitRobot.jl URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| H1 || UNITREE Robotics || MJCF || [https://github.com/google-deepmind/mujoco_menagerie/tree/main/unitree_h1 MJCF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Atlas v4 || Boston Dynamics || URDF || [https://github.com/openai/roboschool/tree/1.0.49/roboschool/models_robot/atlas_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| Valkyrie || NASA JSC Robotics || URDF, Xacro || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/val_description/model URDF], [https://gitlab.com/nasa-jsc-robotics/val_description Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| JVRC-1 || AIST || MJCF, URDF || [https://github.com/isri-aist/jvrc_mj_description/ MJCF], [https://github.com/stephane-caron/jvrc_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| iCub || IIT || URDF || [https://github.com/robotology/icub-models/tree/master/iCub URDF] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| JAXON || JSK || COLLADA, URDF, VRML || [https://github.com/stephane-caron/openrave_models/tree/master/JAXON COLLADA], [https://github.com/robot-descriptions/jaxon_description URDF], [https://github.com/start-jsk/rtmros_choreonoid/tree/master/jvrc_models/JAXON_JVRC VRML] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| Atlas DRC (v3) || Boston Dynamics || URDF || [https://github.com/RobotLocomotion/models/blob/master/atlas/atlas_convex_hull.urdf URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Gundam RX-78 || Bandai Namco Filmworks || URDF || [https://github.com/gundam-global-challenge/gundam_robot/tree/master/gundam_rx78_description URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| Romeo || Aldebaran Robotics || URDF || [https://github.com/ros-aldebaran/romeo_robot/tree/master/romeo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| SigmaBan || Rhoban || URDF || [https://github.com/Rhoban/sigmaban_urdf URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| Robonaut 2 || NASA JSC Robotics || URDF || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/r2_description URDF] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| TALOS || PAL Robotics || URDF, Xacro || [https://github.com/stack-of-tasks/talos-data URDF], [https://github.com/pal-robotics/talos_robot/tree/kinetic-devel/talos_description Xacro] || LGPL-3.0, Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| WALK-MAN || IIT || Xacro || [https://github.com/ADVRHumanoids/iit-walkman-ros-pkg/tree/master/walkman_urdf Xacro] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Draco3 || Apptronik || URDF || [https://github.com/shbang91/draco3_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| ergoCub || IIT || URDF || [https://github.com/icub-tech-iit/ergocub-software/tree/master/urdf/ergoCub URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Baxter || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/baxter.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|}
=== End Effectors ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Allegro Hand || Wonik Robotics || URDF, MJCF || [https://github.com/RobotLocomotion/models/tree/master/allegro_hand_description/urdf URDF], [https://github.com/google-deepmind/mujoco_menagerie/tree/main/wonik_allegro MJCF] || BSD || ✔️ || ✔️ || ✔️
|-
| Shadow Hand E3M5 || The Shadow Robot Company || MJCF || [https://github.com/deepmind/mujoco_menagerie/tree/main/shadow_hand MJCF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Robotiq 2F-85 || Robotiq || MJCF, URDF, Xacro || [https://github.com/deepmind/mujoco_menagerie/tree/main/robotiq_2f85 MJCF], [https://github.com/a-price/robotiq_arg85_description URDF], [https://github.com/ros-industrial/robotiq/tree/kinetic-devel/robotiq_2f_85_gripper_visualization Xacro] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| BarrettHand || Barrett Technology || URDF || [https://github.com/jhu-lcsr-attic/bhand_model/tree/master/robots URDF] || BSD || ✖️ || ✔️ || ✔️
|-
| WSG 50 || SCHUNK || SDF || [https://github.com/RobotLocomotion/models/tree/master/wsg_50_description SDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Baxter Left End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/left_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|}
=== Educational ===
{| class="wikitable"
|-
! Name !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Double Pendulum || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/double_pendulum_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Simple Humanoid || URDF || [https://github.com/laas/simple_humanoid_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✖️
|-
| TriFingerEdu || URDF || [https://github.com/facebookresearch/differentiable-robot-model/tree/main/diff_robot_data/trifinger_edu_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| FingerEdu || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/finger_edu_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
0e920bc7e3de9092f18d4cbdc2384be9b8520aa8
1337
1336
2024-05-28T20:03:18Z
Vrtnis
21
/* Add Pepper */
wikitext
text/x-wiki
=== Humanoids ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! URL !! License !! Meshes !! Inertias !! Collisions
|-
| Stompy || K-Scale Labs || URDF || [https://stompy.kscale.dev URDF], [https://stompy.kscale.dev MJCF] || MIT || ✔️ || ✔️ || ✔️
|-
| Digit || Agility Robotics || URDF || [https://github.com/adubredu/DigitRobot.jl URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| H1 || UNITREE Robotics || MJCF || [https://github.com/google-deepmind/mujoco_menagerie/tree/main/unitree_h1 MJCF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Atlas v4 || Boston Dynamics || URDF || [https://github.com/openai/roboschool/tree/1.0.49/roboschool/models_robot/atlas_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| Valkyrie || NASA JSC Robotics || URDF, Xacro || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/val_description/model URDF], [https://gitlab.com/nasa-jsc-robotics/val_description Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| JVRC-1 || AIST || MJCF, URDF || [https://github.com/isri-aist/jvrc_mj_description/ MJCF], [https://github.com/stephane-caron/jvrc_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| iCub || IIT || URDF || [https://github.com/robotology/icub-models/tree/master/iCub URDF] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| JAXON || JSK || COLLADA, URDF, VRML || [https://github.com/stephane-caron/openrave_models/tree/master/JAXON COLLADA], [https://github.com/robot-descriptions/jaxon_description URDF], [https://github.com/start-jsk/rtmros_choreonoid/tree/master/jvrc_models/JAXON_JVRC VRML] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| Atlas DRC (v3) || Boston Dynamics || URDF || [https://github.com/RobotLocomotion/models/blob/master/atlas/atlas_convex_hull.urdf URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Gundam RX-78 || Bandai Namco Filmworks || URDF || [https://github.com/gundam-global-challenge/gundam_robot/tree/master/gundam_rx78_description URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| Romeo || Aldebaran Robotics || URDF || [https://github.com/ros-aldebaran/romeo_robot/tree/master/romeo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| SigmaBan || Rhoban || URDF || [https://github.com/Rhoban/sigmaban_urdf URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| Robonaut 2 || NASA JSC Robotics || URDF || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/r2_description URDF] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| TALOS || PAL Robotics || URDF, Xacro || [https://github.com/stack-of-tasks/talos-data URDF], [https://github.com/pal-robotics/talos_robot/tree/kinetic-devel/talos_description Xacro] || LGPL-3.0, Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| WALK-MAN || IIT || Xacro || [https://github.com/ADVRHumanoids/iit-walkman-ros-pkg/tree/master/walkman_urdf Xacro] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Draco3 || Apptronik || URDF || [https://github.com/shbang91/draco3_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| ergoCub || IIT || URDF || [https://github.com/icub-tech-iit/ergocub-software/tree/master/urdf/ergoCub URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Baxter || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/baxter.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Pepper || SoftBank Robotics || URDF || [https://github.com/ros-naoqi/pepper_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|}
=== End Effectors ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Allegro Hand || Wonik Robotics || URDF, MJCF || [https://github.com/RobotLocomotion/models/tree/master/allegro_hand_description/urdf URDF], [https://github.com/google-deepmind/mujoco_menagerie/tree/main/wonik_allegro MJCF] || BSD || ✔️ || ✔️ || ✔️
|-
| Shadow Hand E3M5 || The Shadow Robot Company || MJCF || [https://github.com/deepmind/mujoco_menagerie/tree/main/shadow_hand MJCF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Robotiq 2F-85 || Robotiq || MJCF, URDF, Xacro || [https://github.com/deepmind/mujoco_menagerie/tree/main/robotiq_2f85 MJCF], [https://github.com/a-price/robotiq_arg85_description URDF], [https://github.com/ros-industrial/robotiq/tree/kinetic-devel/robotiq_2f_85_gripper_visualization Xacro] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| BarrettHand || Barrett Technology || URDF || [https://github.com/jhu-lcsr-attic/bhand_model/tree/master/robots URDF] || BSD || ✖️ || ✔️ || ✔️
|-
| WSG 50 || SCHUNK || SDF || [https://github.com/RobotLocomotion/models/tree/master/wsg_50_description SDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Baxter Left End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/left_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|}
=== Educational ===
{| class="wikitable"
|-
! Name !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Double Pendulum || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/double_pendulum_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Simple Humanoid || URDF || [https://github.com/laas/simple_humanoid_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✖️
|-
| TriFingerEdu || URDF || [https://github.com/facebookresearch/differentiable-robot-model/tree/main/diff_robot_data/trifinger_edu_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| FingerEdu || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/finger_edu_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
892c00e87a797332d795f01ba7a19104c02a5de0
1338
1337
2024-05-28T20:05:11Z
Vrtnis
21
/*Add Pepper references*/
wikitext
text/x-wiki
=== Humanoids ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! URL !! License !! Meshes !! Inertias !! Collisions
|-
| Stompy || K-Scale Labs || URDF || [https://stompy.kscale.dev URDF], [https://stompy.kscale.dev MJCF] || MIT || ✔️ || ✔️ || ✔️
|-
| Digit || Agility Robotics || URDF || [https://github.com/adubredu/DigitRobot.jl URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| H1 || UNITREE Robotics || MJCF || [https://github.com/google-deepmind/mujoco_menagerie/tree/main/unitree_h1 MJCF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Atlas v4 || Boston Dynamics || URDF || [https://github.com/openai/roboschool/tree/1.0.49/roboschool/models_robot/atlas_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| Valkyrie || NASA JSC Robotics || URDF, Xacro || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/val_description/model URDF], [https://gitlab.com/nasa-jsc-robotics/val_description Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| JVRC-1 || AIST || MJCF, URDF || [https://github.com/isri-aist/jvrc_mj_description/ MJCF], [https://github.com/stephane-caron/jvrc_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| iCub || IIT || URDF || [https://github.com/robotology/icub-models/tree/master/iCub URDF] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| JAXON || JSK || COLLADA, URDF, VRML || [https://github.com/stephane-caron/openrave_models/tree/master/JAXON COLLADA], [https://github.com/robot-descriptions/jaxon_description URDF], [https://github.com/start-jsk/rtmros_choreonoid/tree/master/jvrc_models/JAXON_JVRC VRML] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| Atlas DRC (v3) || Boston Dynamics || URDF || [https://github.com/RobotLocomotion/models/blob/master/atlas/atlas_convex_hull.urdf URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Gundam RX-78 || Bandai Namco Filmworks || URDF || [https://github.com/gundam-global-challenge/gundam_robot/tree/master/gundam_rx78_description URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| Romeo || Aldebaran Robotics || URDF || [https://github.com/ros-aldebaran/romeo_robot/tree/master/romeo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| SigmaBan || Rhoban || URDF || [https://github.com/Rhoban/sigmaban_urdf URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| Robonaut 2 || NASA JSC Robotics || URDF || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/r2_description URDF] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| TALOS || PAL Robotics || URDF, Xacro || [https://github.com/stack-of-tasks/talos-data URDF], [https://github.com/pal-robotics/talos_robot/tree/kinetic-devel/talos_description Xacro] || LGPL-3.0, Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| WALK-MAN || IIT || Xacro || [https://github.com/ADVRHumanoids/iit-walkman-ros-pkg/tree/master/walkman_urdf Xacro] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Draco3 || Apptronik || URDF || [https://github.com/shbang91/draco3_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| ergoCub || IIT || URDF || [https://github.com/icub-tech-iit/ergocub-software/tree/master/urdf/ergoCub URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Baxter || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/baxter.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Pepper || SoftBank Robotics || URDF || [https://github.com/ros-naoqi/pepper_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|}
=== End Effectors ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Allegro Hand || Wonik Robotics || URDF, MJCF || [https://github.com/RobotLocomotion/models/tree/master/allegro_hand_description/urdf URDF], [https://github.com/google-deepmind/mujoco_menagerie/tree/main/wonik_allegro MJCF] || BSD || ✔️ || ✔️ || ✔️
|-
| Shadow Hand E3M5 || The Shadow Robot Company || MJCF || [https://github.com/deepmind/mujoco_menagerie/tree/main/shadow_hand MJCF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Robotiq 2F-85 || Robotiq || MJCF, URDF, Xacro || [https://github.com/deepmind/mujoco_menagerie/tree/main/robotiq_2f85 MJCF], [https://github.com/a-price/robotiq_arg85_description URDF], [https://github.com/ros-industrial/robotiq/tree/kinetic-devel/robotiq_2f_85_gripper_visualization Xacro] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| BarrettHand || Barrett Technology || URDF || [https://github.com/jhu-lcsr-attic/bhand_model/tree/master/robots URDF] || BSD || ✖️ || ✔️ || ✔️
|-
| WSG 50 || SCHUNK || SDF || [https://github.com/RobotLocomotion/models/tree/master/wsg_50_description SDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Baxter Left End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/left_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|}
=== Educational ===
{| class="wikitable"
|-
! Name !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Double Pendulum || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/double_pendulum_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Simple Humanoid || URDF || [https://github.com/laas/simple_humanoid_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✖️
|-
| TriFingerEdu || URDF || [https://github.com/facebookresearch/differentiable-robot-model/tree/main/diff_robot_data/trifinger_edu_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| FingerEdu || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/finger_edu_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
== References ==
* https://github.com/robot-descriptions/awesome-robot-descriptions
* https://github.com/robot-descriptions/robot_descriptions.py
* https://github.com/robotology
d49ec4e30e74e2b51ecdb88dee90cb73b439022a
1339
1338
2024-05-28T20:07:16Z
Vrtnis
21
/*Add Mini-Cheetah*/
wikitext
text/x-wiki
=== Humanoids ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! URL !! License !! Meshes !! Inertias !! Collisions
|-
| Stompy || K-Scale Labs || URDF, MJCF || [https://stompy.kscale.dev URDF], [https://stompy.kscale.dev MJCF] || MIT || ✔️ || ✔️ || ✔️
|-
| Digit || Agility Robotics || URDF || [https://github.com/adubredu/DigitRobot.jl URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| H1 || UNITREE Robotics || MJCF || [https://github.com/google-deepmind/mujoco_menagerie/tree/main/unitree_h1 MJCF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Atlas v4 || Boston Dynamics || URDF || [https://github.com/openai/roboschool/tree/1.0.49/roboschool/models_robot/atlas_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| Valkyrie || NASA JSC Robotics || URDF, Xacro || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/val_description/model URDF], [https://gitlab.com/nasa-jsc-robotics/val_description Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| JVRC-1 || AIST || MJCF, URDF || [https://github.com/isri-aist/jvrc_mj_description/ MJCF], [https://github.com/stephane-caron/jvrc_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| iCub || IIT || URDF || [https://github.com/robotology/icub-models/tree/master/iCub URDF] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| JAXON || JSK || COLLADA, URDF, VRML || [https://github.com/stephane-caron/openrave_models/tree/master/JAXON COLLADA], [https://github.com/robot-descriptions/jaxon_description URDF], [https://github.com/start-jsk/rtmros_choreonoid/tree/master/jvrc_models/JAXON_JVRC VRML] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| Atlas DRC (v3) || Boston Dynamics || URDF || [https://github.com/RobotLocomotion/models/blob/master/atlas/atlas_convex_hull.urdf URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Gundam RX-78 || Bandai Namco Filmworks || URDF || [https://github.com/gundam-global-challenge/gundam_robot/tree/master/gundam_rx78_description URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| Romeo || Aldebaran Robotics || URDF || [https://github.com/ros-aldebaran/romeo_robot/tree/master/romeo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| SigmaBan || Rhoban || URDF || [https://github.com/Rhoban/sigmaban_urdf URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| Robonaut 2 || NASA JSC Robotics || URDF || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/r2_description URDF] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| TALOS || PAL Robotics || URDF, Xacro || [https://github.com/stack-of-tasks/talos-data URDF], [https://github.com/pal-robotics/talos_robot/tree/kinetic-devel/talos_description Xacro] || LGPL-3.0, Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| WALK-MAN || IIT || Xacro || [https://github.com/ADVRHumanoids/iit-walkman-ros-pkg/tree/master/walkman_urdf Xacro] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Draco3 || Apptronik || URDF || [https://github.com/shbang91/draco3_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| ergoCub || IIT || URDF || [https://github.com/icub-tech-iit/ergocub-software/tree/master/urdf/ergoCub URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Baxter || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/baxter.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Pepper || SoftBank Robotics || URDF || [https://github.com/ros-naoqi/pepper_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|}
=== End Effectors ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Allegro Hand || Wonik Robotics || URDF, MJCF || [https://github.com/RobotLocomotion/models/tree/master/allegro_hand_description/urdf URDF], [https://github.com/google-deepmind/mujoco_menagerie/tree/main/wonik_allegro MJCF] || BSD || ✔️ || ✔️ || ✔️
|-
| Shadow Hand E3M5 || The Shadow Robot Company || MJCF || [https://github.com/deepmind/mujoco_menagerie/tree/main/shadow_hand MJCF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Robotiq 2F-85 || Robotiq || MJCF, URDF, Xacro || [https://github.com/deepmind/mujoco_menagerie/tree/main/robotiq_2f85 MJCF], [https://github.com/a-price/robotiq_arg85_description URDF], [https://github.com/ros-industrial/robotiq/tree/kinetic-devel/robotiq_2f_85_gripper_visualization Xacro] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| BarrettHand || Barrett Technology || URDF || [https://github.com/jhu-lcsr-attic/bhand_model/tree/master/robots URDF] || BSD || ✖️ || ✔️ || ✔️
|-
| WSG 50 || SCHUNK || SDF || [https://github.com/RobotLocomotion/models/tree/master/wsg_50_description SDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Baxter Left End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/left_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Mini-Cheetah || MIT || URDF || [https://github.com/MIT-Mini-Cheetah/mini-cheetah URDF] || MIT || ✔️ || ✔️ || ✔️
|}
=== Educational ===
{| class="wikitable"
|-
! Name !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Double Pendulum || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/double_pendulum_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Simple Humanoid || URDF || [https://github.com/laas/simple_humanoid_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✖️
|-
| TriFingerEdu || URDF || [https://github.com/facebookresearch/differentiable-robot-model/tree/main/diff_robot_data/trifinger_edu_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| FingerEdu || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/finger_edu_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
== References ==
* https://github.com/robot-descriptions/awesome-robot-descriptions
* https://github.com/robot-descriptions/robot_descriptions.py
* https://github.com/robotology
a36bdae9e39c24c601c2836fbc05210e67f16a40
1340
1339
2024-05-28T20:07:58Z
Vrtnis
21
/*Add Mini-Cheetah*/
wikitext
text/x-wiki
=== Humanoids ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! URL !! License !! Meshes !! Inertias !! Collisions
|-
| Stompy || K-Scale Labs || URDF, MJCF || [https://stompy.kscale.dev URDF], [https://stompy.kscale.dev MJCF] || MIT || ✔️ || ✔️ || ✔️
|-
| Digit || Agility Robotics || URDF || [https://github.com/adubredu/DigitRobot.jl URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| H1 || UNITREE Robotics || MJCF || [https://github.com/google-deepmind/mujoco_menagerie/tree/main/unitree_h1 MJCF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Atlas v4 || Boston Dynamics || URDF || [https://github.com/openai/roboschool/tree/1.0.49/roboschool/models_robot/atlas_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| Valkyrie || NASA JSC Robotics || URDF, Xacro || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/val_description/model URDF], [https://gitlab.com/nasa-jsc-robotics/val_description Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| JVRC-1 || AIST || MJCF, URDF || [https://github.com/isri-aist/jvrc_mj_description/ MJCF], [https://github.com/stephane-caron/jvrc_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| iCub || IIT || URDF || [https://github.com/robotology/icub-models/tree/master/iCub URDF] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| JAXON || JSK || COLLADA, URDF, VRML || [https://github.com/stephane-caron/openrave_models/tree/master/JAXON COLLADA], [https://github.com/robot-descriptions/jaxon_description URDF], [https://github.com/start-jsk/rtmros_choreonoid/tree/master/jvrc_models/JAXON_JVRC VRML] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| Atlas DRC (v3) || Boston Dynamics || URDF || [https://github.com/RobotLocomotion/models/blob/master/atlas/atlas_convex_hull.urdf URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Gundam RX-78 || Bandai Namco Filmworks || URDF || [https://github.com/gundam-global-challenge/gundam_robot/tree/master/gundam_rx78_description URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| Romeo || Aldebaran Robotics || URDF || [https://github.com/ros-aldebaran/romeo_robot/tree/master/romeo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| SigmaBan || Rhoban || URDF || [https://github.com/Rhoban/sigmaban_urdf URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| Robonaut 2 || NASA JSC Robotics || URDF || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/r2_description URDF] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| TALOS || PAL Robotics || URDF, Xacro || [https://github.com/stack-of-tasks/talos-data URDF], [https://github.com/pal-robotics/talos_robot/tree/kinetic-devel/talos_description Xacro] || LGPL-3.0, Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| WALK-MAN || IIT || Xacro || [https://github.com/ADVRHumanoids/iit-walkman-ros-pkg/tree/master/walkman_urdf Xacro] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Draco3 || Apptronik || URDF || [https://github.com/shbang91/draco3_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| ergoCub || IIT || URDF || [https://github.com/icub-tech-iit/ergocub-software/tree/master/urdf/ergoCub URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Baxter || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/baxter.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Pepper || SoftBank Robotics || URDF || [https://github.com/ros-naoqi/pepper_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| Mini-Cheetah || MIT || URDF || [https://github.com/MIT-Mini-Cheetah/mini-cheetah URDF] || MIT || ✔️ || ✔️ || ✔️
|}
=== End Effectors ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Allegro Hand || Wonik Robotics || URDF, MJCF || [https://github.com/RobotLocomotion/models/tree/master/allegro_hand_description/urdf URDF], [https://github.com/google-deepmind/mujoco_menagerie/tree/main/wonik_allegro MJCF] || BSD || ✔️ || ✔️ || ✔️
|-
| Shadow Hand E3M5 || The Shadow Robot Company || MJCF || [https://github.com/deepmind/mujoco_menagerie/tree/main/shadow_hand MJCF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Robotiq 2F-85 || Robotiq || MJCF, URDF, Xacro || [https://github.com/deepmind/mujoco_menagerie/tree/main/robotiq_2f85 MJCF], [https://github.com/a-price/robotiq_arg85_description URDF], [https://github.com/ros-industrial/robotiq/tree/kinetic-devel/robotiq_2f_85_gripper_visualization Xacro] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| BarrettHand || Barrett Technology || URDF || [https://github.com/jhu-lcsr-attic/bhand_model/tree/master/robots URDF] || BSD || ✖️ || ✔️ || ✔️
|-
| WSG 50 || SCHUNK || SDF || [https://github.com/RobotLocomotion/models/tree/master/wsg_50_description SDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Baxter Left End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/left_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|}
=== Educational ===
{| class="wikitable"
|-
! Name !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Double Pendulum || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/double_pendulum_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Simple Humanoid || URDF || [https://github.com/laas/simple_humanoid_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✖️
|-
| TriFingerEdu || URDF || [https://github.com/facebookresearch/differentiable-robot-model/tree/main/diff_robot_data/trifinger_edu_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| FingerEdu || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/finger_edu_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
== References ==
* https://github.com/robot-descriptions/awesome-robot-descriptions
* https://github.com/robot-descriptions/robot_descriptions.py
* https://github.com/robotology
fb835c3a528cd1aa34abd13f60ff79b9bbd245e8
1341
1340
2024-05-28T20:09:33Z
Vrtnis
21
/* Add Thor Mang */
wikitext
text/x-wiki
=== Humanoids ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! URL !! License !! Meshes !! Inertias !! Collisions
|-
| Stompy || K-Scale Labs || URDF, MJCF || [https://stompy.kscale.dev URDF], [https://stompy.kscale.dev MJCF] || MIT || ✔️ || ✔️ || ✔️
|-
| Digit || Agility Robotics || URDF || [https://github.com/adubredu/DigitRobot.jl URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| H1 || UNITREE Robotics || MJCF || [https://github.com/google-deepmind/mujoco_menagerie/tree/main/unitree_h1 MJCF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Atlas v4 || Boston Dynamics || URDF || [https://github.com/openai/roboschool/tree/1.0.49/roboschool/models_robot/atlas_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| Valkyrie || NASA JSC Robotics || URDF, Xacro || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/val_description/model URDF], [https://gitlab.com/nasa-jsc-robotics/val_description Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| JVRC-1 || AIST || MJCF, URDF || [https://github.com/isri-aist/jvrc_mj_description/ MJCF], [https://github.com/stephane-caron/jvrc_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| iCub || IIT || URDF || [https://github.com/robotology/icub-models/tree/master/iCub URDF] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| JAXON || JSK || COLLADA, URDF, VRML || [https://github.com/stephane-caron/openrave_models/tree/master/JAXON COLLADA], [https://github.com/robot-descriptions/jaxon_description URDF], [https://github.com/start-jsk/rtmros_choreonoid/tree/master/jvrc_models/JAXON_JVRC VRML] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| Atlas DRC (v3) || Boston Dynamics || URDF || [https://github.com/RobotLocomotion/models/blob/master/atlas/atlas_convex_hull.urdf URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Gundam RX-78 || Bandai Namco Filmworks || URDF || [https://github.com/gundam-global-challenge/gundam_robot/tree/master/gundam_rx78_description URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| Romeo || Aldebaran Robotics || URDF || [https://github.com/ros-aldebaran/romeo_robot/tree/master/romeo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| SigmaBan || Rhoban || URDF || [https://github.com/Rhoban/sigmaban_urdf URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| Robonaut 2 || NASA JSC Robotics || URDF || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/r2_description URDF] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| TALOS || PAL Robotics || URDF, Xacro || [https://github.com/stack-of-tasks/talos-data URDF], [https://github.com/pal-robotics/talos_robot/tree/kinetic-devel/talos_description Xacro] || LGPL-3.0, Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| WALK-MAN || IIT || Xacro || [https://github.com/ADVRHumanoids/iit-walkman-ros-pkg/tree/master/walkman_urdf Xacro] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Draco3 || Apptronik || URDF || [https://github.com/shbang91/draco3_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| ergoCub || IIT || URDF || [https://github.com/icub-tech-iit/ergocub-software/tree/master/urdf/ergoCub URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Baxter || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/baxter.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Pepper || SoftBank Robotics || URDF || [https://github.com/ros-naoqi/pepper_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| Mini-Cheetah || MIT || URDF || [https://github.com/MIT-Mini-Cheetah/mini-cheetah URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| Thor-Mang || ROBOTIS || URDF || [https://github.com/ROBOTIS-GIT/ROBOTIS-MANIPULATION-THORMANG URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
=== End Effectors ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Allegro Hand || Wonik Robotics || URDF, MJCF || [https://github.com/RobotLocomotion/models/tree/master/allegro_hand_description/urdf URDF], [https://github.com/google-deepmind/mujoco_menagerie/tree/main/wonik_allegro MJCF] || BSD || ✔️ || ✔️ || ✔️
|-
| Shadow Hand E3M5 || The Shadow Robot Company || MJCF || [https://github.com/deepmind/mujoco_menagerie/tree/main/shadow_hand MJCF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Robotiq 2F-85 || Robotiq || MJCF, URDF, Xacro || [https://github.com/deepmind/mujoco_menagerie/tree/main/robotiq_2f85 MJCF], [https://github.com/a-price/robotiq_arg85_description URDF], [https://github.com/ros-industrial/robotiq/tree/kinetic-devel/robotiq_2f_85_gripper_visualization Xacro] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| BarrettHand || Barrett Technology || URDF || [https://github.com/jhu-lcsr-attic/bhand_model/tree/master/robots URDF] || BSD || ✖️ || ✔️ || ✔️
|-
| WSG 50 || SCHUNK || SDF || [https://github.com/RobotLocomotion/models/tree/master/wsg_50_description SDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Baxter Left End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/left_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|}
=== Educational ===
{| class="wikitable"
|-
! Name !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Double Pendulum || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/double_pendulum_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Simple Humanoid || URDF || [https://github.com/laas/simple_humanoid_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✖️
|-
| TriFingerEdu || URDF || [https://github.com/facebookresearch/differentiable-robot-model/tree/main/diff_robot_data/trifinger_edu_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| FingerEdu || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/finger_edu_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
== References ==
* https://github.com/robot-descriptions/awesome-robot-descriptions
* https://github.com/robot-descriptions/robot_descriptions.py
* https://github.com/robotology
f1f863aa3936f9ca3c519e584d231cdcc96c8f75
1342
1341
2024-05-28T20:12:24Z
Vrtnis
21
/* Baxter End Effectors */
wikitext
text/x-wiki
=== Humanoids ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! URL !! License !! Meshes !! Inertias !! Collisions
|-
| Stompy || K-Scale Labs || URDF, MJCF || [https://stompy.kscale.dev URDF], [https://stompy.kscale.dev MJCF] || MIT || ✔️ || ✔️ || ✔️
|-
| Digit || Agility Robotics || URDF || [https://github.com/adubredu/DigitRobot.jl URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| H1 || UNITREE Robotics || MJCF || [https://github.com/google-deepmind/mujoco_menagerie/tree/main/unitree_h1 MJCF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Atlas v4 || Boston Dynamics || URDF || [https://github.com/openai/roboschool/tree/1.0.49/roboschool/models_robot/atlas_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| Valkyrie || NASA JSC Robotics || URDF, Xacro || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/val_description/model URDF], [https://gitlab.com/nasa-jsc-robotics/val_description Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| JVRC-1 || AIST || MJCF, URDF || [https://github.com/isri-aist/jvrc_mj_description/ MJCF], [https://github.com/stephane-caron/jvrc_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| iCub || IIT || URDF || [https://github.com/robotology/icub-models/tree/master/iCub URDF] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| JAXON || JSK || COLLADA, URDF, VRML || [https://github.com/stephane-caron/openrave_models/tree/master/JAXON COLLADA], [https://github.com/robot-descriptions/jaxon_description URDF], [https://github.com/start-jsk/rtmros_choreonoid/tree/master/jvrc_models/JAXON_JVRC VRML] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| Atlas DRC (v3) || Boston Dynamics || URDF || [https://github.com/RobotLocomotion/models/blob/master/atlas/atlas_convex_hull.urdf URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Gundam RX-78 || Bandai Namco Filmworks || URDF || [https://github.com/gundam-global-challenge/gundam_robot/tree/master/gundam_rx78_description URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| Romeo || Aldebaran Robotics || URDF || [https://github.com/ros-aldebaran/romeo_robot/tree/master/romeo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| SigmaBan || Rhoban || URDF || [https://github.com/Rhoban/sigmaban_urdf URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| Robonaut 2 || NASA JSC Robotics || URDF || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/r2_description URDF] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| TALOS || PAL Robotics || URDF, Xacro || [https://github.com/stack-of-tasks/talos-data URDF], [https://github.com/pal-robotics/talos_robot/tree/kinetic-devel/talos_description Xacro] || LGPL-3.0, Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| WALK-MAN || IIT || Xacro || [https://github.com/ADVRHumanoids/iit-walkman-ros-pkg/tree/master/walkman_urdf Xacro] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Draco3 || Apptronik || URDF || [https://github.com/shbang91/draco3_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| ergoCub || IIT || URDF || [https://github.com/icub-tech-iit/ergocub-software/tree/master/urdf/ergoCub URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Baxter || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/baxter.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Pepper || SoftBank Robotics || URDF || [https://github.com/ros-naoqi/pepper_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| Mini-Cheetah || MIT || URDF || [https://github.com/MIT-Mini-Cheetah/mini-cheetah URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| Thor-Mang || ROBOTIS || URDF || [https://github.com/ROBOTIS-GIT/ROBOTIS-MANIPULATION-THORMANG URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
=== End Effectors ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Allegro Hand || Wonik Robotics || URDF, MJCF || [https://github.com/RobotLocomotion/models/tree/master/allegro_hand_description/urdf URDF], [https://github.com/google-deepmind/mujoco_menagerie/tree/main/wonik_allegro MJCF] || BSD || ✔️ || ✔️ || ✔️
|-
| Shadow Hand E3M5 || The Shadow Robot Company || MJCF || [https://github.com/deepmind/mujoco_menagerie/tree/main/shadow_hand MJCF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Robotiq 2F-85 || Robotiq || MJCF, URDF, Xacro || [https://github.com/deepmind/mujoco_menagerie/tree/main/robotiq_2f85 MJCF], [https://github.com/a-price/robotiq_arg85_description URDF], [https://github.com/ros-industrial/robotiq/tree/kinetic-devel/robotiq_2f_85_gripper_visualization Xacro] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| BarrettHand || Barrett Technology || URDF || [https://github.com/jhu-lcsr-attic/bhand_model/tree/master/robots URDF] || BSD || ✖️ || ✔️ || ✔️
|-
| WSG 50 || SCHUNK || SDF || [https://github.com/RobotLocomotion/models/tree/master/wsg_50_description SDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Baxter Left End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/left_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Baxter Right End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/right_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|}
=== Educational ===
{| class="wikitable"
|-
! Name !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Double Pendulum || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/double_pendulum_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Simple Humanoid || URDF || [https://github.com/laas/simple_humanoid_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✖️
|-
| TriFingerEdu || URDF || [https://github.com/facebookresearch/differentiable-robot-model/tree/main/diff_robot_data/trifinger_edu_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| FingerEdu || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/finger_edu_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
== References ==
* https://github.com/robot-descriptions/awesome-robot-descriptions
* https://github.com/robot-descriptions/robot_descriptions.py
* https://github.com/robotology
76c3778e1bff91bc8248fd4aa0e15af72fceb87c
1343
1342
2024-05-28T20:25:59Z
Vrtnis
21
/*Asimo, Sophia, Cassie*/
wikitext
text/x-wiki
=== Humanoids ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! URL !! License !! Meshes !! Inertias !! Collisions
|-
| Stompy || K-Scale Labs || URDF, MJCF || [https://stompy.kscale.dev URDF], [https://stompy.kscale.dev MJCF] || MIT || ✔️ || ✔️ || ✔️
|-
| Digit || Agility Robotics || URDF || [https://github.com/adubredu/DigitRobot.jl URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| H1 || UNITREE Robotics || MJCF || [https://github.com/google-deepmind/mujoco_menagerie/tree/main/unitree_h1 MJCF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Atlas v4 || Boston Dynamics || URDF || [https://github.com/openai/roboschool/tree/1.0.49/roboschool/models_robot/atlas_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| Valkyrie || NASA JSC Robotics || URDF, Xacro || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/val_description/model URDF], [https://gitlab.com/nasa-jsc-robotics/val_description Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| JVRC-1 || AIST || MJCF, URDF || [https://github.com/isri-aist/jvrc_mj_description/ MJCF], [https://github.com/stephane-caron/jvrc_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| iCub || IIT || URDF || [https://github.com/robotology/icub-models/tree/master/iCub URDF] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| JAXON || JSK || COLLADA, URDF, VRML || [https://github.com/stephane-caron/openrave_models/tree/master/JAXON COLLADA], [https://github.com/robot-descriptions/jaxon_description URDF], [https://github.com/start-jsk/rtmros_choreonoid/tree/master/jvrc_models/JAXON_JVRC VRML] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| Atlas DRC (v3) || Boston Dynamics || URDF || [https://github.com/RobotLocomotion/models/blob/master/atlas/atlas_convex_hull.urdf URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Gundam RX-78 || Bandai Namco Filmworks || URDF || [https://github.com/gundam-global-challenge/gundam_robot/tree/master/gundam_rx78_description URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| Romeo || Aldebaran Robotics || URDF || [https://github.com/ros-aldebaran/romeo_robot/tree/master/romeo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| SigmaBan || Rhoban || URDF || [https://github.com/Rhoban/sigmaban_urdf URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| Robonaut 2 || NASA JSC Robotics || URDF || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/r2_description URDF] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| TALOS || PAL Robotics || URDF, Xacro || [https://github.com/stack-of-tasks/talos-data URDF], [https://github.com/pal-robotics/talos_robot/tree/kinetic-devel/talos_description Xacro] || LGPL-3.0, Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| WALK-MAN || IIT || Xacro || [https://github.com/ADVRHumanoids/iit-walkman-ros-pkg/tree/master/walkman_urdf Xacro] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Draco3 || Apptronik || URDF || [https://github.com/shbang91/draco3_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| ergoCub || IIT || URDF || [https://github.com/icub-tech-iit/ergocub-software/tree/master/urdf/ergoCub URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Baxter || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/baxter.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Pepper || SoftBank Robotics || URDF || [https://github.com/ros-naoqi/pepper_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| Mini-Cheetah || MIT || URDF || [https://github.com/MIT-Mini-Cheetah/mini-cheetah URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| Thor-Mang || ROBOTIS || URDF || [https://github.com/ROBOTIS-GIT/ROBOTIS-MANIPULATION-THORMANG URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Cassie || Agility Robotics || URDF || [https://github.com/agilityrobotics/cassie_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| Sophia || Hanson Robotics || URDF || [https://github.com/hansonrobotics/sophia_robot URDF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Asimo || Honda || URDF || [https://github.com/honda/asimo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
=== End Effectors ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Allegro Hand || Wonik Robotics || URDF, MJCF || [https://github.com/RobotLocomotion/models/tree/master/allegro_hand_description/urdf URDF], [https://github.com/google-deepmind/mujoco_menagerie/tree/main/wonik_allegro MJCF] || BSD || ✔️ || ✔️ || ✔️
|-
| Shadow Hand E3M5 || The Shadow Robot Company || MJCF || [https://github.com/deepmind/mujoco_menagerie/tree/main/shadow_hand MJCF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Robotiq 2F-85 || Robotiq || MJCF, URDF, Xacro || [https://github.com/deepmind/mujoco_menagerie/tree/main/robotiq_2f85 MJCF], [https://github.com/a-price/robotiq_arg85_description URDF], [https://github.com/ros-industrial/robotiq/tree/kinetic-devel/robotiq_2f_85_gripper_visualization Xacro] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| BarrettHand || Barrett Technology || URDF || [https://github.com/jhu-lcsr-attic/bhand_model/tree/master/robots URDF] || BSD || ✖️ || ✔️ || ✔️
|-
| WSG 50 || SCHUNK || SDF || [https://github.com/RobotLocomotion/models/tree/master/wsg_50_description SDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Baxter Left End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/left_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Baxter Right End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/right_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|}
=== Educational ===
{| class="wikitable"
|-
! Name !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Double Pendulum || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/double_pendulum_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Simple Humanoid || URDF || [https://github.com/laas/simple_humanoid_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✖️
|-
| TriFingerEdu || URDF || [https://github.com/facebookresearch/differentiable-robot-model/tree/main/diff_robot_data/trifinger_edu_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| FingerEdu || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/finger_edu_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
== References ==
* https://github.com/robot-descriptions/awesome-robot-descriptions
* https://github.com/robot-descriptions/robot_descriptions.py
* https://github.com/robotology
ef5331d02e39383d8959448b0053be84c84829e3
1344
1343
2024-05-29T00:45:40Z
Vrtnis
21
/*Add Kawada Robotics*/
wikitext
text/x-wiki
=== Humanoids ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! URL !! License !! Meshes !! Inertias !! Collisions
|-
| Stompy || K-Scale Labs || URDF, MJCF || [https://stompy.kscale.dev URDF], [https://stompy.kscale.dev MJCF] || MIT || ✔️ || ✔️ || ✔️
|-
| Digit || Agility Robotics || URDF || [https://github.com/adubredu/DigitRobot.jl URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| H1 || UNITREE Robotics || MJCF || [https://github.com/google-deepmind/mujoco_menagerie/tree/main/unitree_h1 MJCF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Atlas v4 || Boston Dynamics || URDF || [https://github.com/openai/roboschool/tree/1.0.49/roboschool/models_robot/atlas_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| Valkyrie || NASA JSC Robotics || URDF, Xacro || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/val_description/model URDF], [https://gitlab.com/nasa-jsc-robotics/val_description Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| JVRC-1 || AIST || MJCF, URDF || [https://github.com/isri-aist/jvrc_mj_description/ MJCF], [https://github.com/stephane-caron/jvrc_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| iCub || IIT || URDF || [https://github.com/robotology/icub-models/tree/master/iCub URDF] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| JAXON || JSK || COLLADA, URDF, VRML || [https://github.com/stephane-caron/openrave_models/tree/master/JAXON COLLADA], [https://github.com/robot-descriptions/jaxon_description URDF], [https://github.com/start-jsk/rtmros_choreonoid/tree/master/jvrc_models/JAXON_JVRC VRML] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| Atlas DRC (v3) || Boston Dynamics || URDF || [https://github.com/RobotLocomotion/models/blob/master/atlas/atlas_convex_hull.urdf URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Gundam RX-78 || Bandai Namco Filmworks || URDF || [https://github.com/gundam-global-challenge/gundam_robot/tree/master/gundam_rx78_description URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| Romeo || Aldebaran Robotics || URDF || [https://github.com/ros-aldebaran/romeo_robot/tree/master/romeo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| SigmaBan || Rhoban || URDF || [https://github.com/Rhoban/sigmaban_urdf URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| Robonaut 2 || NASA JSC Robotics || URDF || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/r2_description URDF] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| TALOS || PAL Robotics || URDF, Xacro || [https://github.com/stack-of-tasks/talos-data URDF], [https://github.com/pal-robotics/talos_robot/tree/kinetic-devel/talos_description Xacro] || LGPL-3.0, Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| WALK-MAN || IIT || Xacro || [https://github.com/ADVRHumanoids/iit-walkman-ros-pkg/tree/master/walkman_urdf Xacro] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Draco3 || Apptronik || URDF || [https://github.com/shbang91/draco3_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| ergoCub || IIT || URDF || [https://github.com/icub-tech-iit/ergocub-software/tree/master/urdf/ergoCub URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Baxter || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/baxter.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Pepper || SoftBank Robotics || URDF || [https://github.com/ros-naoqi/pepper_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| Mini-Cheetah || MIT || URDF || [https://github.com/MIT-Mini-Cheetah/mini-cheetah URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| Thor-Mang || ROBOTIS || URDF || [https://github.com/ROBOTIS-GIT/ROBOTIS-MANIPULATION-THORMANG URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Cassie || Agility Robotics || URDF || [https://github.com/agilityrobotics/cassie_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| Sophia || Hanson Robotics || URDF || [https://github.com/hansonrobotics/sophia_robot URDF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Asimo || Honda || URDF || [https://github.com/honda/asimo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| HRP-5P || Kawada Robotics || URDF || [https://github.com/kawada-robotics/hrp5p URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|}
=== End Effectors ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Allegro Hand || Wonik Robotics || URDF, MJCF || [https://github.com/RobotLocomotion/models/tree/master/allegro_hand_description/urdf URDF], [https://github.com/google-deepmind/mujoco_menagerie/tree/main/wonik_allegro MJCF] || BSD || ✔️ || ✔️ || ✔️
|-
| Shadow Hand E3M5 || The Shadow Robot Company || MJCF || [https://github.com/deepmind/mujoco_menagerie/tree/main/shadow_hand MJCF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Robotiq 2F-85 || Robotiq || MJCF, URDF, Xacro || [https://github.com/deepmind/mujoco_menagerie/tree/main/robotiq_2f85 MJCF], [https://github.com/a-price/robotiq_arg85_description URDF], [https://github.com/ros-industrial/robotiq/tree/kinetic-devel/robotiq_2f_85_gripper_visualization Xacro] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| BarrettHand || Barrett Technology || URDF || [https://github.com/jhu-lcsr-attic/bhand_model/tree/master/robots URDF] || BSD || ✖️ || ✔️ || ✔️
|-
| WSG 50 || SCHUNK || SDF || [https://github.com/RobotLocomotion/models/tree/master/wsg_50_description SDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Baxter Left End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/left_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Baxter Right End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/right_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|}
=== Educational ===
{| class="wikitable"
|-
! Name !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Double Pendulum || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/double_pendulum_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Simple Humanoid || URDF || [https://github.com/laas/simple_humanoid_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✖️
|-
| TriFingerEdu || URDF || [https://github.com/facebookresearch/differentiable-robot-model/tree/main/diff_robot_data/trifinger_edu_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| FingerEdu || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/finger_edu_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
== References ==
* https://github.com/robot-descriptions/awesome-robot-descriptions
* https://github.com/robot-descriptions/robot_descriptions.py
* https://github.com/robotology
c1713f349ff441644e9bf91cec89f521d2498737
1347
1344
2024-05-29T04:25:41Z
Vrtnis
21
/* Add NASA Valkyrie*/
wikitext
text/x-wiki
=== Humanoids ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! URL !! License !! Meshes !! Inertias !! Collisions
|-
| Stompy || K-Scale Labs || URDF, MJCF || [https://stompy.kscale.dev URDF], [https://stompy.kscale.dev MJCF] || MIT || ✔️ || ✔️ || ✔️
|-
| Digit || Agility Robotics || URDF || [https://github.com/adubredu/DigitRobot.jl URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| H1 || UNITREE Robotics || MJCF || [https://github.com/google-deepmind/mujoco_menagerie/tree/main/unitree_h1 MJCF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Atlas v4 || Boston Dynamics || URDF || [https://github.com/openai/roboschool/tree/1.0.49/roboschool/models_robot/atlas_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| Valkyrie || NASA JSC Robotics || URDF, Xacro || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/val_description/model URDF], [https://gitlab.com/nasa-jsc-robotics/val_description Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| JVRC-1 || AIST || MJCF, URDF || [https://github.com/isri-aist/jvrc_mj_description/ MJCF], [https://github.com/stephane-caron/jvrc_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| iCub || IIT || URDF || [https://github.com/robotology/icub-models/tree/master/iCub URDF] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| JAXON || JSK || COLLADA, URDF, VRML || [https://github.com/stephane-caron/openrave_models/tree/master/JAXON COLLADA], [https://github.com/robot-descriptions/jaxon_description URDF], [https://github.com/start-jsk/rtmros_choreonoid/tree/master/jvrc_models/JAXON_JVRC VRML] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| Atlas DRC (v3) || Boston Dynamics || URDF || [https://github.com/RobotLocomotion/models/blob/master/atlas/atlas_convex_hull.urdf URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Gundam RX-78 || Bandai Namco Filmworks || URDF || [https://github.com/gundam-global-challenge/gundam_robot/tree/master/gundam_rx78_description URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| Romeo || Aldebaran Robotics || URDF || [https://github.com/ros-aldebaran/romeo_robot/tree/master/romeo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| SigmaBan || Rhoban || URDF || [https://github.com/Rhoban/sigmaban_urdf URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| Robonaut 2 || NASA JSC Robotics || URDF || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/r2_description URDF] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| TALOS || PAL Robotics || URDF, Xacro || [https://github.com/stack-of-tasks/talos-data URDF], [https://github.com/pal-robotics/talos_robot/tree/kinetic-devel/talos_description Xacro] || LGPL-3.0, Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| WALK-MAN || IIT || Xacro || [https://github.com/ADVRHumanoids/iit-walkman-ros-pkg/tree/master/walkman_urdf Xacro] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Draco3 || Apptronik || URDF || [https://github.com/shbang91/draco3_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| ergoCub || IIT || URDF || [https://github.com/icub-tech-iit/ergocub-software/tree/master/urdf/ergoCub URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Baxter || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/baxter.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Pepper || SoftBank Robotics || URDF || [https://github.com/ros-naoqi/pepper_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| Mini-Cheetah || MIT || URDF || [https://github.com/MIT-Mini-Cheetah/mini-cheetah URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| Thor-Mang || ROBOTIS || URDF || [https://github.com/ROBOTIS-GIT/ROBOTIS-MANIPULATION-THORMANG URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Cassie || Agility Robotics || URDF || [https://github.com/agilityrobotics/cassie_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| Sophia || Hanson Robotics || URDF || [https://github.com/hansonrobotics/sophia_robot URDF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Asimo || Honda || URDF || [https://github.com/honda/asimo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| HRP-5P || Kawada Robotics || URDF || [https://github.com/kawada-robotics/hrp5p URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| Valkyrie R5 || NASA || URDF, Xacro || [https://github.com/nasa/valkyrie_simulation URDF], [https://github.com/nasa/valkyrie_robot Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|}
=== End Effectors ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Allegro Hand || Wonik Robotics || URDF, MJCF || [https://github.com/RobotLocomotion/models/tree/master/allegro_hand_description/urdf URDF], [https://github.com/google-deepmind/mujoco_menagerie/tree/main/wonik_allegro MJCF] || BSD || ✔️ || ✔️ || ✔️
|-
| Shadow Hand E3M5 || The Shadow Robot Company || MJCF || [https://github.com/deepmind/mujoco_menagerie/tree/main/shadow_hand MJCF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Robotiq 2F-85 || Robotiq || MJCF, URDF, Xacro || [https://github.com/deepmind/mujoco_menagerie/tree/main/robotiq_2f85 MJCF], [https://github.com/a-price/robotiq_arg85_description URDF], [https://github.com/ros-industrial/robotiq/tree/kinetic-devel/robotiq_2f_85_gripper_visualization Xacro] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| BarrettHand || Barrett Technology || URDF || [https://github.com/jhu-lcsr-attic/bhand_model/tree/master/robots URDF] || BSD || ✖️ || ✔️ || ✔️
|-
| WSG 50 || SCHUNK || SDF || [https://github.com/RobotLocomotion/models/tree/master/wsg_50_description SDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Baxter Left End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/left_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Baxter Right End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/right_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|}
=== Educational ===
{| class="wikitable"
|-
! Name !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Double Pendulum || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/double_pendulum_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Simple Humanoid || URDF || [https://github.com/laas/simple_humanoid_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✖️
|-
| TriFingerEdu || URDF || [https://github.com/facebookresearch/differentiable-robot-model/tree/main/diff_robot_data/trifinger_edu_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| FingerEdu || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/finger_edu_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
== References ==
* https://github.com/robot-descriptions/awesome-robot-descriptions
* https://github.com/robot-descriptions/robot_descriptions.py
* https://github.com/robotology
f662b7edaf0ecbf806b2546efaf7c6bdd8a9b7c8
1348
1347
2024-05-29T04:26:58Z
Vrtnis
21
/* Add REEM-C */
wikitext
text/x-wiki
=== Humanoids ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! URL !! License !! Meshes !! Inertias !! Collisions
|-
| Stompy || K-Scale Labs || URDF, MJCF || [https://stompy.kscale.dev URDF], [https://stompy.kscale.dev MJCF] || MIT || ✔️ || ✔️ || ✔️
|-
| Digit || Agility Robotics || URDF || [https://github.com/adubredu/DigitRobot.jl URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| H1 || UNITREE Robotics || MJCF || [https://github.com/google-deepmind/mujoco_menagerie/tree/main/unitree_h1 MJCF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Atlas v4 || Boston Dynamics || URDF || [https://github.com/openai/roboschool/tree/1.0.49/roboschool/models_robot/atlas_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| Valkyrie || NASA JSC Robotics || URDF, Xacro || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/val_description/model URDF], [https://gitlab.com/nasa-jsc-robotics/val_description Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| JVRC-1 || AIST || MJCF, URDF || [https://github.com/isri-aist/jvrc_mj_description/ MJCF], [https://github.com/stephane-caron/jvrc_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| iCub || IIT || URDF || [https://github.com/robotology/icub-models/tree/master/iCub URDF] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| JAXON || JSK || COLLADA, URDF, VRML || [https://github.com/stephane-caron/openrave_models/tree/master/JAXON COLLADA], [https://github.com/robot-descriptions/jaxon_description URDF], [https://github.com/start-jsk/rtmros_choreonoid/tree/master/jvrc_models/JAXON_JVRC VRML] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| Atlas DRC (v3) || Boston Dynamics || URDF || [https://github.com/RobotLocomotion/models/blob/master/atlas/atlas_convex_hull.urdf URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Gundam RX-78 || Bandai Namco Filmworks || URDF || [https://github.com/gundam-global-challenge/gundam_robot/tree/master/gundam_rx78_description URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| Romeo || Aldebaran Robotics || URDF || [https://github.com/ros-aldebaran/romeo_robot/tree/master/romeo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| SigmaBan || Rhoban || URDF || [https://github.com/Rhoban/sigmaban_urdf URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| Robonaut 2 || NASA JSC Robotics || URDF || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/r2_description URDF] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| TALOS || PAL Robotics || URDF, Xacro || [https://github.com/stack-of-tasks/talos-data URDF], [https://github.com/pal-robotics/talos_robot/tree/kinetic-devel/talos_description Xacro] || LGPL-3.0, Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| WALK-MAN || IIT || Xacro || [https://github.com/ADVRHumanoids/iit-walkman-ros-pkg/tree/master/walkman_urdf Xacro] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Draco3 || Apptronik || URDF || [https://github.com/shbang91/draco3_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| ergoCub || IIT || URDF || [https://github.com/icub-tech-iit/ergocub-software/tree/master/urdf/ergoCub URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Baxter || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/baxter.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Pepper || SoftBank Robotics || URDF || [https://github.com/ros-naoqi/pepper_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| Mini-Cheetah || MIT || URDF || [https://github.com/MIT-Mini-Cheetah/mini-cheetah URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| Thor-Mang || ROBOTIS || URDF || [https://github.com/ROBOTIS-GIT/ROBOTIS-MANIPULATION-THORMANG URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Cassie || Agility Robotics || URDF || [https://github.com/agilityrobotics/cassie_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| Sophia || Hanson Robotics || URDF || [https://github.com/hansonrobotics/sophia_robot URDF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Asimo || Honda || URDF || [https://github.com/honda/asimo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| HRP-5P || Kawada Robotics || URDF || [https://github.com/kawada-robotics/hrp5p URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| Valkyrie R5 || NASA || URDF, Xacro || [https://github.com/nasa/valkyrie_simulation URDF], [https://github.com/nasa/valkyrie_robot Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| REEM-C || PAL Robotics || URDF || [https://github.com/pal-robotics/reemc_description URDF] || LGPL-3.0 || ✔️ || ✔️ || ✔️
|}
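The URDF files listed above are plain XML, so they can be inspected without any robotics toolchain. Below is a minimal sketch using only the Python standard library; the file path is a placeholder for wherever you cloned one of the repositories in the table.

<syntaxhighlight lang="python">
# Minimal URDF inspection with only the Python standard library.
# The path is a placeholder; point it at any URDF from the table above.
import xml.etree.ElementTree as ET

URDF_PATH = "jvrc_description/urdf/jvrc1.urdf"  # hypothetical local checkout

tree = ET.parse(URDF_PATH)
robot = tree.getroot()  # the root element of a URDF is <robot name="...">

links = [link.get("name") for link in robot.findall("link")]
joints = [(j.get("name"), j.get("type")) for j in robot.findall("joint")]

print(f"robot: {robot.get('name')}")
print(f"{len(links)} links, {len(joints)} joints")
for name, joint_type in joints:
    print(f"  {name}: {joint_type}")
</syntaxhighlight>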
=== End Effectors ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Allegro Hand || Wonik Robotics || URDF, MJCF || [https://github.com/RobotLocomotion/models/tree/master/allegro_hand_description/urdf URDF], [https://github.com/google-deepmind/mujoco_menagerie/tree/main/wonik_allegro MJCF] || BSD || ✔️ || ✔️ || ✔️
|-
| Shadow Hand E3M5 || The Shadow Robot Company || MJCF || [https://github.com/deepmind/mujoco_menagerie/tree/main/shadow_hand MJCF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Robotiq 2F-85 || Robotiq || MJCF, URDF, Xacro || [https://github.com/deepmind/mujoco_menagerie/tree/main/robotiq_2f85 MJCF], [https://github.com/a-price/robotiq_arg85_description URDF], [https://github.com/ros-industrial/robotiq/tree/kinetic-devel/robotiq_2f_85_gripper_visualization Xacro] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| BarrettHand || Barrett Technology || URDF || [https://github.com/jhu-lcsr-attic/bhand_model/tree/master/robots URDF] || BSD || ✖️ || ✔️ || ✔️
|-
| WSG 50 || SCHUNK || SDF || [https://github.com/RobotLocomotion/models/tree/master/wsg_50_description SDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Baxter Left End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/left_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Baxter Right End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/right_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|}
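Several of the grippers above ship as MJCF, which loads directly into the MuJoCo Python bindings. The sketch below assumes MuJoCo is installed (pip install mujoco) and that the mujoco_menagerie repository linked above has been cloned locally; the exact file path is a placeholder.

<syntaxhighlight lang="python">
# Load an MJCF gripper model with the MuJoCo Python bindings.
# The path is a placeholder for a local clone of mujoco_menagerie.
import mujoco

MJCF_PATH = "mujoco_menagerie/robotiq_2f85/2f85.xml"  # hypothetical local path

model = mujoco.MjModel.from_xml_path(MJCF_PATH)
data = mujoco.MjData(model)

print(f"bodies: {model.nbody}, joints: {model.njnt}, actuators: {model.nu}")

# Step the passive dynamics for one second of simulated time.
while data.time < 1.0:
    mujoco.mj_step(model, data)
print(f"simulated up to t = {data.time:.3f} s")
</syntaxhighlight>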
=== Educational ===
{| class="wikitable"
|-
! Name !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Double Pendulum || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/double_pendulum_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Simple Humanoid || URDF || [https://github.com/laas/simple_humanoid_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✖️
|-
| TriFingerEdu || URDF || [https://github.com/facebookresearch/differentiable-robot-model/tree/main/diff_robot_data/trifinger_edu_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| FingerEdu || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/finger_edu_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
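The robot_descriptions.py package cited in the references below bundles most of these models behind a single download-and-cache interface. The sketch below follows that project's README from memory; treat the module and attribute names as assumptions and check them against the package's own documentation.

<syntaxhighlight lang="python">
# Fetch a description through robot_descriptions.py (pip install robot_descriptions).
# Module and attribute names follow that project's README and may change between
# releases; treat them as assumptions rather than a stable API reference.
from robot_descriptions import jvrc_description

# The package clones/caches the upstream repository and exposes the main file path.
print(jvrc_description.URDF_PATH)
</syntaxhighlight>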
== References ==
* https://github.com/robot-descriptions/awesome-robot-descriptions
* https://github.com/robot-descriptions/robot_descriptions.py
* https://github.com/robotology
149f48299863f1b865c8c263ff7721ae06c0e716
K-Scale Weekly Progress Updates
0
294
1345
2024-05-29T02:46:19Z
Ben
2
Created page with "{| class="wikitable" |- ! Link |- | [https://twitter.com/kscalelabs/status/1788968705378181145 2024.05.10] |- | [https://x.com/kscalelabs/status/1791507358780461496 2024.05.17..."
wikitext
text/x-wiki
{| class="wikitable"
|-
! Link
|-
| [https://twitter.com/kscalelabs/status/1788968705378181145 2024.05.10]
|-
| [https://x.com/kscalelabs/status/1791507358780461496 2024.05.17]
|-
| [https://x.com/kscalelabs/status/1794109131214712914 2024.05.24]
|}
2047a5c01d3fe9522dadc6b7f119a591d9f2d1b6
1346
1345
2024-05-29T03:12:28Z
Ben
2
wikitext
text/x-wiki
{| class="wikitable"
|-
! Link
|-
| [https://twitter.com/kscalelabs/status/1788968705378181145 2024.05.10]
|-
| [https://x.com/kscalelabs/status/1791507358780461496 2024.05.17]
|-
| [https://x.com/kscalelabs/status/1794109131214712914 2024.05.24]
|}
[[Category:K-Scale]]
6eba57fc29aa717215ff25a4ef681f43fb83a5db
Main Page
0
1
1349
1286
2024-05-29T04:38:34Z
Vrtnis
21
/* List of Humanoid Robots */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.youtube.com/watch?v=LiNgr1tz49I&list=PLZnJoM76RM6ItAfZIxJYNKdaR_BobleLY Advanced Robot Dynamics]
| High-quality open-source course from CMU
|-
| [https://www.youtube.com/watch?v=6rUdAOCNXAU&list=PLZnJoM76RM6KugDT9sw5zhAmqKnGeoLRa Optimal Control]
| High-quality open-source course from CMU
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
| Textbook for graduate-level courses in robotics
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and in real environments
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Courses]]
| Category for pages about useful courses related to robotics
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Firmware]]
| Category for pages relating to firmware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Papers]]
| Category for humanoid robotics papers
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Honda Robotics]]
| [[ASIMO]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[SoftBank Robotics]]
| [[Pepper]], [[NAO]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]], [[G1]]
|-
| [[Instituto Italiano|Istituto Italiano di Tecnologia]]
| [[iCub]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[Xpeng]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]], [[Robonaut2]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[Proxy]]
|
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[SUPCON]]
| [[Navigator α]]
|-
| [[LASER Robotics]]
| [[HECTOR V2]]
|-
| [[UC Berkeley]]
| [[Berkeley Blue]]
|-
| [[NEURA Robotics]]
| [[4NE-1]]
|-
| [[University of Tehran]]
| [[Surena IV]]
|-
| [[Haier]]
| [[Kuavo (Kuafu)]]
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== List of Actuators ===
{| class="wikitable"
|-
! Actuator
! Notes
|-
| [[OBot]]
| Open-source actuator
|-
| [[SPIN Servo]]
| Open-source actuator
|-
| [[VESCular6]]
| A project based on [[VESC]]
|-
| [[ODrive]]
| A precision motor controller
|-
| [[Solo Motor Controller]]
| A motor controller alternative to the [[ODrive]].
|-
| [[J60]]
| Actuators built for the [[DEEP Robotics]] quadrupeds.
|-
| [[K-Scale Motor Controller]]
| An open-source motor controller
|}
09843c8b175405fd4665cfad14c904de6b87ff8b
1350
1349
2024-05-29T04:41:02Z
Vrtnis
21
/* Add Stanford Robotics Lab */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.youtube.com/watch?v=LiNgr1tz49I&list=PLZnJoM76RM6ItAfZIxJYNKdaR_BobleLY Advanced Robot Dynamics]
| High-quality open-source course from CMU
|-
| [https://www.youtube.com/watch?v=6rUdAOCNXAU&list=PLZnJoM76RM6KugDT9sw5zhAmqKnGeoLRa Optimal Control]
| High-quality open-source course from CMU
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
| Textbook for graduate-level courses in robotics
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Courses]]
| Category for pages about useful courses related to robotics
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Firmware]]
| Category for pages relating to firmware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Papers]]
| Category for humanoid robotics papers
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Honda Robotics]]
| [[ASIMO]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[SoftBank Robotics]]
| [[Pepper]], [[NAO]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]], [[G1]]
|-
| [[Instituto Italiano]]
| [[iCub]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[Xpeng]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]], [[Robonaut2]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[Proxy]]
|
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[SUPCON]]
| [[Navigator α]]
|-
| [[LASER Robotics]]
| [[HECTOR V2]]
|-
| [[UC Berkeley]]
| [[Berkeley Blue]]
|-
| [[NEURA Robotics]]
| [[4NE-1]]
|-
| [[University of Tehran]]
| [[Surena IV]]
|-
| [[Haier]]
| [[Kuavo (Kuafu)]]
|-
| [[Stanford Robotics Lab]]
| [[OceanOneK]]
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== List of Actuators ===
{| class="wikitable"
|-
! Actuator
! Notes
|-
| [[OBot]]
| Open-source actuator
|-
| [[SPIN Servo]]
| Open-source actuator
|-
| [[VESCular6]]
| A project based on [[VESC]]
|-
| [[ODrive]]
| A precision motor controller
|-
| [[Solo Motor Controller]]
| A motor controller alternative to the [[ODrive]].
|-
| [[J60]]
| Actuators built for the [[DEEP Robotics]] quadrupeds.
|-
| [[K-Scale Motor Controller]]
| An open-source motor controller
|}
ab75e3ff270ec6bfe8e258b5c1d0d888cc982638
1355
1350
2024-05-29T17:34:31Z
Ben
2
/* List of Humanoid Robots */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.youtube.com/watch?v=LiNgr1tz49I&list=PLZnJoM76RM6ItAfZIxJYNKdaR_BobleLY Advanced Robot Dynamics]
| High-quality open-source course from CMU
|-
| [https://www.youtube.com/watch?v=6rUdAOCNXAU&list=PLZnJoM76RM6KugDT9sw5zhAmqKnGeoLRa Optimal Control]
| High-quality open-source course from CMU
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
| Textbook for graduate-level courses in robotics
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Courses]]
| Category for pages about useful courses related to robotics
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Firmware]]
| Category for pages relating to firmware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Papers]]
| Category for humanoid robotics papers
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Honda Robotics]]
| [[ASIMO]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[SoftBank Robotics]]
| [[Pepper]], [[NAO]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Unitree]]
| [[H1]], [[G1]]
|-
| [[Instituto Italiano]]
| [[iCub]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[Xpeng]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]], [[Robonaut2]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[Proxy]]
|
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[SUPCON]]
| [[Navigator α]]
|-
| [[LASER Robotics]]
| [[HECTOR V2]]
|-
| [[UC Berkeley]]
| [[Berkeley Blue]]
|-
| [[NEURA Robotics]]
| [[4NE-1]]
|-
| [[University of Tehran]]
| [[Surena IV]]
|-
| [[Haier]]
| [[Kuavo (Kuafu)]]
|-
| [[Stanford Robotics Lab]]
| [[OceanOneK]]
|-
| [[WorkFar]]
| [[WorkFar Syntro]]
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
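As a minimal illustration of the first protocol listed above, the sketch below sends one raw CAN frame and waits for a reply using the python-can package over a Linux SocketCAN interface. The channel name, arbitration ID, and payload are placeholders chosen for illustration, not any particular actuator's protocol.
<syntaxhighlight lang=python>
import can

# Placeholder bus: a SocketCAN interface named "can0" on Linux.
# Swap the channel/bustype for whatever CAN adapter you actually use.
bus = can.interface.Bus(channel="can0", bustype="socketcan")

# Placeholder frame: an arbitrary 11-bit identifier with an 8-byte payload.
msg = can.Message(
    arbitration_id=0x141,
    data=[0xA1, 0x00, 0x00, 0x00, 0x10, 0x27, 0x00, 0x00],
    is_extended_id=False,
)

bus.send(msg)
reply = bus.recv(timeout=1.0)  # returns None if nothing arrives within one second
print(reply)

bus.shutdown()
</syntaxhighlight>
Higher-level stacks (CANopen, EtherCAT masters, vendor SDKs) wrap this kind of raw frame exchange, but the frame itself is what travels on the wire.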
=== List of Actuators ===
{| class="wikitable"
|-
! Actuator
! Notes
|-
| [[OBot]]
| Open-source actuator
|-
| [[SPIN Servo]]
| Open-source actuator
|-
| [[VESCular6]]
| A project based on [[VESC]]
|-
| [[ODrive]]
| A precision motor controller
|-
| [[Solo Motor Controller]]
| A motor controller alternative to the [[ODrive]].
|-
| [[J60]]
| Actuators built for the [[DEEP Robotics]] quadrupeds.
|-
| [[K-Scale Motor Controller]]
| An open-source motor controller
|}
e590bc3a348ff7bbce568e0bbf507b64d1b2b376
Stanford Robotics Lab
0
295
1351
2024-05-29T04:52:45Z
Vrtnis
21
/*Add Stanford Robotics Lab*/
wikitext
text/x-wiki
The Stanford Robotics Lab, part of the Stanford Artificial Intelligence Laboratory (SAIL), is a research facility dedicated to the advancement of robotics technology. Located within Stanford University, the lab focuses on a wide range of research areas, including autonomous systems, human-robot interaction, robotic manipulation, and machine learning applications in robotics.
The lab's interdisciplinary approach combines expertise from various fields such as computer science, electrical engineering, mechanical engineering, and cognitive sciences to develop innovative robotic solutions. Researchers at the Stanford Robotics Lab work on projects that aim to improve the capabilities and intelligence of robots, enabling them to perform complex tasks in dynamic and unstructured environments.
Notable achievements of the lab include significant contributions to the development of autonomous vehicles, advanced prosthetics, and collaborative robots that can safely and efficiently work alongside humans. The lab also emphasizes real-world applications and often collaborates with industry partners to bring cutting-edge research into practical use.
[https://robotics.stanford.edu Stanford Robotics Lab website]
05592a2cd3fd83d7fe2c00e8518c0e921d364eaa
OceanOneK
0
296
1352
2024-05-29T04:55:44Z
Vrtnis
21
/*Add OceanOneK*/
wikitext
text/x-wiki
'''OceanOneK''' is an advanced underwater humanoid robot developed by the Stanford Robotics Lab.
OceanOneK is equipped with stereoscopic vision, haptic feedback, and fully articulated arms and hands, allowing operators to perform delicate and precise tasks in underwater settings.
Key features of OceanOneK include:
* '''Stereoscopic Vision''': Provides operators with a clear, three-dimensional view of the underwater environment, enhancing situational awareness and precision.
* '''Haptic Feedback''': Allows operators to feel the force and resistance encountered by the robot's hands, providing a tactile sense of the underwater objects being manipulated.
* '''Articulated Arms and Hands''': Enable the robot to perform complex tasks such as collecting samples, repairing equipment, and interacting with delicate marine life.
[https://robotics.stanford.edu/oceanonek OceanOneK project page]
51b9301c2d1441827aa5017f87c91475de12074d
1353
1352
2024-05-29T05:10:09Z
Vrtnis
21
/*Add infobox*/
wikitext
text/x-wiki
{{Infobox robot
| name = OceanOneK
| image =
| caption = OceanOneK robot during a deep-sea mission
| country = United States
| year_of_creation = 2022
| developer = Stanford Robotics Lab
| type = Underwater Humanoid Robot
| purpose = Deep-sea exploration, Archaeology, Environmental monitoring
| weight = 235 kg
| height = 1.8 meters
| manipulator = Fully articulated arms and hands
| vision = Stereoscopic vision
| feedback = Haptic feedback
| control = Remote operator
| website = [https://robotics.stanford.edu/oceanonek OceanOneK project page]
}}
'''OceanOneK''' is an advanced underwater humanoid robot developed by the Stanford Robotics Lab.
OceanOneK is equipped with stereoscopic vision, haptic feedback, and fully articulated arms and hands, allowing operators to perform delicate and precise tasks in underwater settings.
Key features of OceanOneK include:
* '''Stereoscopic Vision''': Provides operators with a clear, three-dimensional view of the underwater environment, enhancing situational awareness and precision.
* '''Haptic Feedback''': Allows operators to feel the force and resistance encountered by the robot's hands, providing a tactile sense of the underwater objects being manipulated.
* '''Articulated Arms and Hands''': Enable the robot to perform complex tasks such as collecting samples, repairing equipment, and interacting with delicate marine life.
[https://robotics.stanford.edu/oceanonek OceanOneK project page]
8e03d41f301eb1f2a422b67ea5c54aaa0ee82f01
1354
1353
2024-05-29T05:11:42Z
Vrtnis
21
wikitext
text/x-wiki
{{Infobox robot
| name = OceanOneK
| organization = [[Stanford Robotics Lab]]
| height = 1.8 meters
| weight = 235 kg
| purpose = Deep-sea exploration, Archaeology, Environmental monitoring
| manipulator = Fully articulated arms and hands
| vision = Stereoscopic vision
| feedback = Haptic feedback
| control = Remote operator
| website = https://robotics.stanford.edu/oceanonek
}}
'''OceanOneK''' is an advanced underwater humanoid robot developed by the Stanford Robotics Lab.
OceanOneK is equipped with stereoscopic vision, haptic feedback, and fully articulated arms and hands, allowing operators to perform delicate and precise tasks in underwater settings.
Key features of OceanOneK include:
* '''Stereoscopic Vision''': Provides operators with a clear, three-dimensional view of the underwater environment, enhancing situational awareness and precision.
* '''Haptic Feedback''': Allows operators to feel the force and resistance encountered by the robot's hands, providing a tactile sense of the underwater objects being manipulated.
* '''Articulated Arms and Hands''': Enable the robot to perform complex tasks such as collecting samples, repairing equipment, and interacting with delicate marine life.
[https://robotics.stanford.edu/oceanonek OceanOneK project page]
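To make the haptic-feedback loop described above concrete, here is a minimal conceptual sketch of bilateral force feedback: a force measured at the robot's wrist is scaled and clamped before being rendered on the operator's haptic device. This illustrates the general idea only and is not OceanOneK's actual control software; the scaling factor, clamp, and I/O functions are invented placeholders.
<syntaxhighlight lang=python>
import random  # stands in for real sensor and device I/O in this sketch

FORCE_SCALE = 0.4      # fraction of the measured contact force echoed to the operator (assumed value)
MAX_FEEDBACK_N = 20.0  # clamp so the haptic device is never asked to render more than this (assumed value)


def read_wrist_force():
    """Placeholder for the robot-side force/torque sensor reading, in newtons."""
    return random.uniform(0.0, 50.0)


def render_to_haptic_device(force_n):
    """Placeholder for commanding the operator's haptic device."""
    print(f"operator feels {force_n:.1f} N")


# One iteration per control tick; a real system would run this at a fixed, high rate.
for _ in range(5):
    measured = read_wrist_force()
    feedback = min(FORCE_SCALE * measured, MAX_FEEDBACK_N)
    render_to_haptic_device(feedback)
</syntaxhighlight>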
a9365161e4cd733547c238030271693c5fcc9e3f
WorkFar
0
297
1356
2024-05-29T17:35:21Z
Ben
2
Created page with "WorkFar is a US-based company building a humanoid robot for factory automation. {{infobox company | name = WorkFar | country = USA | website_link = https://www.workfar.com/ |..."
wikitext
text/x-wiki
WorkFar is a US-based company building a humanoid robot for factory automation.
{{infobox company
| name = WorkFar
| country = USA
| website_link = https://www.workfar.com/
| robots = [[WorkFar Syntro]]
}}
[[Category:Companies]]
d20c9cd6994a53455dd2e36e6da000de55387886
WorkFar Syntro
0
298
1357
2024-05-29T17:36:30Z
Ben
2
Created page with "The '''Syntro''' robot from [[WorkFar]] is designed for advanced warehouse automation. {{infobox robot | name = Syntro | organization = [[WorkFar]] | video_link = https://ww..."
wikitext
text/x-wiki
The '''Syntro''' robot from [[WorkFar]] is designed for advanced warehouse automation.
{{infobox robot
| name = Syntro
| organization = [[WorkFar]]
| video_link = https://www.youtube.com/watch?v=suF7mEtLJvY
| cost =
| height =
| weight =
| speed =
| lift_force =
| battery_life =
| battery_capacity =
| purchase_link =
| number_made =
| dof =
| status =
}}
[[Category:Robots]]
a1972a23ba3237855ac71e9086d80612bdafa18a
Robot Descriptions List
0
281
1358
1348
2024-05-29T22:34:38Z
Vrtnis
21
/*Add Darwin OP*/
wikitext
text/x-wiki
=== Humanoids ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! URL !! License !! Meshes !! Inertias !! Collisions
|-
| Stompy || K-Scale Labs || URDF, MJCF || [https://stompy.kscale.dev URDF], [https://stompy.kscale.dev MJCF] || MIT || ✔️ || ✔️ || ✔️
|-
| Digit || Agility Robotics || URDF || [https://github.com/adubredu/DigitRobot.jl URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| H1 || UNITREE Robotics || MJCF || [https://github.com/google-deepmind/mujoco_menagerie/tree/main/unitree_h1 MJCF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Atlas v4 || Boston Dynamics || URDF || [https://github.com/openai/roboschool/tree/1.0.49/roboschool/models_robot/atlas_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| Valkyrie || NASA JSC Robotics || URDF, Xacro || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/val_description/model URDF], [https://gitlab.com/nasa-jsc-robotics/val_description Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| JVRC-1 || AIST || MJCF, URDF || [https://github.com/isri-aist/jvrc_mj_description/ MJCF], [https://github.com/stephane-caron/jvrc_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| iCub || IIT || URDF || [https://github.com/robotology/icub-models/tree/master/iCub URDF] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| JAXON || JSK || COLLADA, URDF, VRML || [https://github.com/stephane-caron/openrave_models/tree/master/JAXON COLLADA], [https://github.com/robot-descriptions/jaxon_description URDF], [https://github.com/start-jsk/rtmros_choreonoid/tree/master/jvrc_models/JAXON_JVRC VRML] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| Atlas DRC (v3) || Boston Dynamics || URDF || [https://github.com/RobotLocomotion/models/blob/master/atlas/atlas_convex_hull.urdf URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Gundam RX-78 || Bandai Namco Filmworks || URDF || [https://github.com/gundam-global-challenge/gundam_robot/tree/master/gundam_rx78_description URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| Romeo || Aldebaran Robotics || URDF || [https://github.com/ros-aldebaran/romeo_robot/tree/master/romeo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| SigmaBan || Rhoban || URDF || [https://github.com/Rhoban/sigmaban_urdf URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| Robonaut 2 || NASA JSC Robotics || URDF || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/r2_description URDF] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| TALOS || PAL Robotics || URDF, Xacro || [https://github.com/stack-of-tasks/talos-data URDF], [https://github.com/pal-robotics/talos_robot/tree/kinetic-devel/talos_description Xacro] || LGPL-3.0, Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| WALK-MAN || IIT || Xacro || [https://github.com/ADVRHumanoids/iit-walkman-ros-pkg/tree/master/walkman_urdf Xacro] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Draco3 || Apptronik || URDF || [https://github.com/shbang91/draco3_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| ergoCub || IIT || URDF || [https://github.com/icub-tech-iit/ergocub-software/tree/master/urdf/ergoCub URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Baxter || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/baxter.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Pepper || SoftBank Robotics || URDF || [https://github.com/ros-naoqi/pepper_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| Mini-Cheetah || MIT || URDF || [https://github.com/MIT-Mini-Cheetah/mini-cheetah URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| Thor-Mang || ROBOTIS || URDF || [https://github.com/ROBOTIS-GIT/ROBOTIS-MANIPULATION-THORMANG URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Cassie || Agility Robotics || URDF || [https://github.com/agilityrobotics/cassie_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| Sophia || Hanson Robotics || URDF || [https://github.com/hansonrobotics/sophia_robot URDF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Asimo || Honda || URDF || [https://github.com/honda/asimo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| HRP-5P || Kawada Robotics || URDF || [https://github.com/kawada-robotics/hrp5p URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| Valkyrie R5 || NASA || URDF, Xacro || [https://github.com/nasa/valkyrie_simulation URDF], [https://github.com/nasa/valkyrie_robot Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| REEM-C || PAL Robotics || URDF || [https://github.com/pal-robotics/reemc_description URDF] || LGPL-3.0 || ✔️ || ✔️ || ✔️
|-
| Darwin-OP || ROBOTIS || URDF || [https://github.com/ROBOTIS-GIT/Darwin_OP_ROS URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
=== End Effectors ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Allegro Hand || Wonik Robotics || URDF, MJCF || [https://github.com/RobotLocomotion/models/tree/master/allegro_hand_description/urdf URDF], [https://github.com/google-deepmind/mujoco_menagerie/tree/main/wonik_allegro MJCF] || BSD || ✔️ || ✔️ || ✔️
|-
| Shadow Hand E3M5 || The Shadow Robot Company || MJCF || [https://github.com/deepmind/mujoco_menagerie/tree/main/shadow_hand MJCF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Robotiq 2F-85 || Robotiq || MJCF, URDF, Xacro || [https://github.com/deepmind/mujoco_menagerie/tree/main/robotiq_2f85 MJCF], [https://github.com/a-price/robotiq_arg85_description URDF], [https://github.com/ros-industrial/robotiq/tree/kinetic-devel/robotiq_2f_85_gripper_visualization Xacro] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| BarrettHand || Barrett Technology || URDF || [https://github.com/jhu-lcsr-attic/bhand_model/tree/master/robots URDF] || BSD || ✖️ || ✔️ || ✔️
|-
| WSG 50 || SCHUNK || SDF || [https://github.com/RobotLocomotion/models/tree/master/wsg_50_description SDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Baxter Left End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/left_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Baxter Right End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/right_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|}
=== Educational ===
{| class="wikitable"
|-
! Name !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Double Pendulum || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/double_pendulum_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Simple Humanoid || URDF || [https://github.com/laas/simple_humanoid_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✖️
|-
| TriFingerEdu || URDF || [https://github.com/facebookresearch/differentiable-robot-model/tree/main/diff_robot_data/trifinger_edu_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| FingerEdu || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/finger_edu_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
== References ==
* https://github.com/robot-descriptions/awesome-robot-descriptions
* https://github.com/robot-descriptions/robot_descriptions.py
* https://github.com/robotology
c6a269feb47bdb0f4fc7dfc2f57ab846dfd00ec6
1359
1358
2024-05-29T22:35:42Z
Vrtnis
21
/* Add Darwin OP URDF */
wikitext
text/x-wiki
=== Humanoids ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! URL !! License !! Meshes !! Inertias !! Collisions
|-
| Stompy || K-Scale Labs || URDF, MJCF || [https://stompy.kscale.dev URDF], [https://stompy.kscale.dev MJCF] || MIT || ✔️ || ✔️ || ✔️
|-
| Digit || Agility Robotics || URDF || [https://github.com/adubredu/DigitRobot.jl URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| H1 || UNITREE Robotics || MJCF || [https://github.com/google-deepmind/mujoco_menagerie/tree/main/unitree_h1 MJCF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Atlas v4 || Boston Dynamics || URDF || [https://github.com/openai/roboschool/tree/1.0.49/roboschool/models_robot/atlas_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| Valkyrie || NASA JSC Robotics || URDF, Xacro || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/val_description/model URDF], [https://gitlab.com/nasa-jsc-robotics/val_description Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| JVRC-1 || AIST || MJCF, URDF || [https://github.com/isri-aist/jvrc_mj_description/ MJCF], [https://github.com/stephane-caron/jvrc_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| iCub || IIT || URDF || [https://github.com/robotology/icub-models/tree/master/iCub URDF] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| JAXON || JSK || COLLADA, URDF, VRML || [https://github.com/stephane-caron/openrave_models/tree/master/JAXON COLLADA], [https://github.com/robot-descriptions/jaxon_description URDF], [https://github.com/start-jsk/rtmros_choreonoid/tree/master/jvrc_models/JAXON_JVRC VRML] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| Atlas DRC (v3) || Boston Dynamics || URDF || [https://github.com/RobotLocomotion/models/blob/master/atlas/atlas_convex_hull.urdf URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Gundam RX-78 || Bandai Namco Filmworks || URDF || [https://github.com/gundam-global-challenge/gundam_robot/tree/master/gundam_rx78_description URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| Romeo || Aldebaran Robotics || URDF || [https://github.com/ros-aldebaran/romeo_robot/tree/master/romeo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| SigmaBan || Rhoban || URDF || [https://github.com/Rhoban/sigmaban_urdf URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| Robonaut 2 || NASA JSC Robotics || URDF || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/r2_description URDF] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| TALOS || PAL Robotics || URDF, Xacro || [https://github.com/stack-of-tasks/talos-data URDF], [https://github.com/pal-robotics/talos_robot/tree/kinetic-devel/talos_description Xacro] || LGPL-3.0, Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| WALK-MAN || IIT || Xacro || [https://github.com/ADVRHumanoids/iit-walkman-ros-pkg/tree/master/walkman_urdf Xacro] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Draco3 || Apptronik || URDF || [https://github.com/shbang91/draco3_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| ergoCub || IIT || URDF || [https://github.com/icub-tech-iit/ergocub-software/tree/master/urdf/ergoCub URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Baxter || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/baxter.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Pepper || SoftBank Robotics || URDF || [https://github.com/ros-naoqi/pepper_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| Mini-Cheetah || MIT || URDF || [https://github.com/MIT-Mini-Cheetah/mini-cheetah URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| Thor-Mang || ROBOTIS || URDF || [https://github.com/ROBOTIS-GIT/ROBOTIS-MANIPULATION-THORMANG URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Cassie || Agility Robotics || URDF || [https://github.com/agilityrobotics/cassie_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| Sophia || Hanson Robotics || URDF || [https://github.com/hansonrobotics/sophia_robot URDF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Asimo || Honda || URDF || [https://github.com/honda/asimo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| HRP-5P || Kawada Robotics || URDF || [https://github.com/kawada-robotics/hrp5p URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| Valkyrie R5 || NASA || URDF, Xacro || [https://github.com/nasa/valkyrie_simulation URDF], [https://github.com/nasa/valkyrie_robot Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| REEM-C || PAL Robotics || URDF || [https://github.com/pal-robotics/reemc_description URDF] || LGPL-3.0 || ✔️ || ✔️ || ✔️
|-
| Darwin-OP || ROBOTIS || URDF || [https://github.com/ROBOTIS-GIT/Darwin_OP_ROS URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
=== End Effectors ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Allegro Hand || Wonik Robotics || URDF, MJCF || [https://github.com/RobotLocomotion/models/tree/master/allegro_hand_description/urdf URDF], [https://github.com/google-deepmind/mujoco_menagerie/tree/main/wonik_allegro MJCF] || BSD || ✔️ || ✔️ || ✔️
|-
| Shadow Hand E3M5 || The Shadow Robot Company || MJCF || [https://github.com/deepmind/mujoco_menagerie/tree/main/shadow_hand MJCF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Robotiq 2F-85 || Robotiq || MJCF, URDF, Xacro || [https://github.com/deepmind/mujoco_menagerie/tree/main/robotiq_2f85 MJCF], [https://github.com/a-price/robotiq_arg85_description URDF], [https://github.com/ros-industrial/robotiq/tree/kinetic-devel/robotiq_2f_85_gripper_visualization Xacro] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| BarrettHand || Barrett Technology || URDF || [https://github.com/jhu-lcsr-attic/bhand_model/tree/master/robots URDF] || BSD || ✖️ || ✔️ || ✔️
|-
| WSG 50 || SCHUNK || SDF || [https://github.com/RobotLocomotion/models/tree/master/wsg_50_description SDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Baxter Left End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/left_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Baxter Right End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/right_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|}
=== Educational ===
{| class="wikitable"
|-
! Name !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Double Pendulum || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/double_pendulum_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Simple Humanoid || URDF || [https://github.com/laas/simple_humanoid_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✖️
|-
| TriFingerEdu || URDF || [https://github.com/facebookresearch/differentiable-robot-model/tree/main/diff_robot_data/trifinger_edu_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| FingerEdu || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/finger_edu_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
== References ==
* https://github.com/robot-descriptions/awesome-robot-descriptions
* https://github.com/robot-descriptions/robot_descriptions.py
* https://github.com/robotology
e9d62f6cb35ee61958ea80076a7fc312ee723c27
1360
1359
2024-05-29T22:43:50Z
Vrtnis
21
/*Add INRIA Poppy Humanoid*/
wikitext
text/x-wiki
=== Humanoids ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! URL !! License !! Meshes !! Inertias !! Collisions
|-
| Stompy || K-Scale Labs || URDF, MJCF || [https://stompy.kscale.dev URDF], [https://stompy.kscale.dev MJCF] || MIT || ✔️ || ✔️ || ✔️
|-
| Digit || Agility Robotics || URDF || [https://github.com/adubredu/DigitRobot.jl URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| H1 || UNITREE Robotics || MJCF || [https://github.com/google-deepmind/mujoco_menagerie/tree/main/unitree_h1 MJCF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Atlas v4 || Boston Dynamics || URDF || [https://github.com/openai/roboschool/tree/1.0.49/roboschool/models_robot/atlas_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| Valkyrie || NASA JSC Robotics || URDF, Xacro || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/val_description/model URDF], [https://gitlab.com/nasa-jsc-robotics/val_description Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| JVRC-1 || AIST || MJCF, URDF || [https://github.com/isri-aist/jvrc_mj_description/ MJCF], [https://github.com/stephane-caron/jvrc_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| iCub || IIT || URDF || [https://github.com/robotology/icub-models/tree/master/iCub URDF] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| JAXON || JSK || COLLADA, URDF, VRML || [https://github.com/stephane-caron/openrave_models/tree/master/JAXON COLLADA], [https://github.com/robot-descriptions/jaxon_description URDF], [https://github.com/start-jsk/rtmros_choreonoid/tree/master/jvrc_models/JAXON_JVRC VRML] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| Atlas DRC (v3) || Boston Dynamics || URDF || [https://github.com/RobotLocomotion/models/blob/master/atlas/atlas_convex_hull.urdf URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Gundam RX-78 || Bandai Namco Filmworks || URDF || [https://github.com/gundam-global-challenge/gundam_robot/tree/master/gundam_rx78_description URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| Romeo || Aldebaran Robotics || URDF || [https://github.com/ros-aldebaran/romeo_robot/tree/master/romeo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| SigmaBan || Rhoban || URDF || [https://github.com/Rhoban/sigmaban_urdf URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| Robonaut 2 || NASA JSC Robotics || URDF || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/r2_description URDF] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| TALOS || PAL Robotics || URDF, Xacro || [https://github.com/stack-of-tasks/talos-data URDF], [https://github.com/pal-robotics/talos_robot/tree/kinetic-devel/talos_description Xacro] || LGPL-3.0, Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| WALK-MAN || IIT || Xacro || [https://github.com/ADVRHumanoids/iit-walkman-ros-pkg/tree/master/walkman_urdf Xacro] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Draco3 || Apptronik || URDF || [https://github.com/shbang91/draco3_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| ergoCub || IIT || URDF || [https://github.com/icub-tech-iit/ergocub-software/tree/master/urdf/ergoCub URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Baxter || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/baxter.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Pepper || SoftBank Robotics || URDF || [https://github.com/ros-naoqi/pepper_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| Mini-Cheetah || MIT || URDF || [https://github.com/MIT-Mini-Cheetah/mini-cheetah URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| Thor-Mang || ROBOTIS || URDF || [https://github.com/ROBOTIS-GIT/ROBOTIS-MANIPULATION-THORMANG URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Cassie || Agility Robotics || URDF || [https://github.com/agilityrobotics/cassie_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| Sophia || Hanson Robotics || URDF || [https://github.com/hansonrobotics/sophia_robot URDF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Asimo || Honda || URDF || [https://github.com/honda/asimo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| HRP-5P || Kawada Robotics || URDF || [https://github.com/kawada-robotics/hrp5p URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| Valkyrie R5 || NASA || URDF, Xacro || [https://github.com/nasa/valkyrie_simulation URDF], [https://github.com/nasa/valkyrie_robot Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| REEM-C || PAL Robotics || URDF || [https://github.com/pal-robotics/reemc_description URDF] || LGPL-3.0 || ✔️ || ✔️ || ✔️
|-
| Darwin-OP || ROBOTIS || URDF || [https://github.com/ROBOTIS-GIT/Darwin_OP_ROS URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Poppy || Inria Flowers || URDF || [https://github.com/poppy-project/poppy_humanoid URDF] || GPL-3.0 || ✔️ || ✔️ || ✔️
|}
=== End Effectors ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Allegro Hand || Wonik Robotics || URDF, MJCF || [https://github.com/RobotLocomotion/models/tree/master/allegro_hand_description/urdf URDF], [https://github.com/google-deepmind/mujoco_menagerie/tree/main/wonik_allegro MJCF] || BSD || ✔️ || ✔️ || ✔️
|-
| Shadow Hand E3M5 || The Shadow Robot Company || MJCF || [https://github.com/deepmind/mujoco_menagerie/tree/main/shadow_hand MJCF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Robotiq 2F-85 || Robotiq || MJCF, URDF, Xacro || [https://github.com/deepmind/mujoco_menagerie/tree/main/robotiq_2f85 MJCF], [https://github.com/a-price/robotiq_arg85_description URDF], [https://github.com/ros-industrial/robotiq/tree/kinetic-devel/robotiq_2f_85_gripper_visualization Xacro] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| BarrettHand || Barrett Technology || URDF || [https://github.com/jhu-lcsr-attic/bhand_model/tree/master/robots URDF] || BSD || ✖️ || ✔️ || ✔️
|-
| WSG 50 || SCHUNK || SDF || [https://github.com/RobotLocomotion/models/tree/master/wsg_50_description SDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Baxter Left End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/left_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Baxter Right End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/right_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|}
=== Educational ===
{| class="wikitable"
|-
! Name !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Double Pendulum || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/double_pendulum_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Simple Humanoid || URDF || [https://github.com/laas/simple_humanoid_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✖️
|-
| TriFingerEdu || URDF || [https://github.com/facebookresearch/differentiable-robot-model/tree/main/diff_robot_data/trifinger_edu_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| FingerEdu || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/finger_edu_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
== References ==
* https://github.com/robot-descriptions/awesome-robot-descriptions
* https://github.com/robot-descriptions/robot_descriptions.py
* https://github.com/robotology
4bd7515b010f82daacaffc2222526372e67f7c5f
1361
1360
2024-05-29T22:46:11Z
Vrtnis
21
/*Make sortable*/
wikitext
text/x-wiki
=== Humanoids ===
{| class="wikitable sortable"
|-
! Sr No !! Name !! Maker !! Formats !! URL !! License !! Meshes !! Inertias !! Collisions
|-
| 1 || Stompy || K-Scale Labs || URDF, MJCF || [https://stompy.kscale.dev URDF], [https://stompy.kscale.dev MJCF] || MIT || ✔️ || ✔️ || ✔️
|-
| 2 || Digit || Agility Robotics || URDF || [https://github.com/adubredu/DigitRobot.jl URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| 3 || H1 || UNITREE Robotics || MJCF || [https://github.com/google-deepmind/mujoco_menagerie/tree/main/unitree_h1 MJCF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 4 || Atlas v4 || Boston Dynamics || URDF || [https://github.com/openai/roboschool/tree/1.0.49/roboschool/models_robot/atlas_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 5 || Valkyrie || NASA JSC Robotics || URDF, Xacro || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/val_description/model URDF], [https://gitlab.com/nasa-jsc-robotics/val_description Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 6 || JVRC-1 || AIST || MJCF, URDF || [https://github.com/isri-aist/jvrc_mj_description/ MJCF], [https://github.com/stephane-caron/jvrc_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 7 || iCub || IIT || URDF || [https://github.com/robotology/icub-models/tree/master/iCub URDF] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| 8 || JAXON || JSK || COLLADA, URDF, VRML || [https://github.com/stephane-caron/openrave_models/tree/master/JAXON COLLADA], [https://github.com/robot-descriptions/jaxon_description URDF], [https://github.com/start-jsk/rtmros_choreonoid/tree/master/jvrc_models/JAXON_JVRC VRML] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| 9 || Atlas DRC (v3) || Boston Dynamics || URDF || [https://github.com/RobotLocomotion/models/blob/master/atlas/atlas_convex_hull.urdf URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 10 || Gundam RX-78 || Bandai Namco Filmworks || URDF || [https://github.com/gundam-global-challenge/gundam_robot/tree/master/gundam_rx78_description URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| 11 || Romeo || Aldebaran Robotics || URDF || [https://github.com/ros-aldebaran/romeo_robot/tree/master/romeo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 12 || SigmaBan || Rhoban || URDF || [https://github.com/Rhoban/sigmaban_urdf URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 13 || Robonaut 2 || NASA JSC Robotics || URDF || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/r2_description URDF] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 14 || TALOS || PAL Robotics || URDF, Xacro || [https://github.com/stack-of-tasks/talos-data URDF], [https://github.com/pal-robotics/talos_robot/tree/kinetic-devel/talos_description Xacro] || LGPL-3.0, Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 15 || WALK-MAN || IIT || Xacro || [https://github.com/ADVRHumanoids/iit-walkman-ros-pkg/tree/master/walkman_urdf Xacro] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 16 || Draco3 || Apptronik || URDF || [https://github.com/shbang91/draco3_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 17 || ergoCub || IIT || URDF || [https://github.com/icub-tech-iit/ergocub-software/tree/master/urdf/ergoCub URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 18 || Baxter || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/baxter.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 19 || Pepper || SoftBank Robotics || URDF || [https://github.com/ros-naoqi/pepper_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 20 || Mini-Cheetah || MIT || URDF || [https://github.com/MIT-Mini-Cheetah/mini-cheetah URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 21 || Thor-Mang || ROBOTIS || URDF || [https://github.com/ROBOTIS-GIT/ROBOTIS-MANIPULATION-THORMANG URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 22 || Cassie || Agility Robotics || URDF || [https://github.com/agilityrobotics/cassie_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 23 || Sophia || Hanson Robotics || URDF || [https://github.com/hansonrobotics/sophia_robot URDF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 24 || Asimo || Honda || URDF || [https://github.com/honda/asimo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 25 || HRP-5P || Kawada Robotics || URDF || [https://github.com/kawada-robotics/hrp5p URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 26 || Valkyrie R5 || NASA || URDF, Xacro || [https://github.com/nasa/valkyrie_simulation URDF], [https://github.com/nasa/valkyrie_robot Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 27 || REEM-C || PAL Robotics || URDF || [https://github.com/pal-robotics/reemc_description URDF] || LGPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 28 || Darwin-OP || ROBOTIS || URDF || [https://github.com/ROBOTIS-GIT/Darwin_OP_ROS URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 29 || Poppy || Inria Flowers || URDF || [https://github.com/poppy-project/poppy_humanoid URDF] || GPL-3.0 || ✔️ || ✔️ || ✔️
|}
=== End Effectors ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Allegro Hand || Wonik Robotics || URDF, MJCF || [https://github.com/RobotLocomotion/models/tree/master/allegro_hand_description/urdf URDF], [https://github.com/google-deepmind/mujoco_menagerie/tree/main/wonik_allegro MJCF] || BSD || ✔️ || ✔️ || ✔️
|-
| Shadow Hand E3M5 || The Shadow Robot Company || MJCF || [https://github.com/deepmind/mujoco_menagerie/tree/main/shadow_hand MJCF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Robotiq 2F-85 || Robotiq || MJCF, URDF, Xacro || [https://github.com/deepmind/mujoco_menagerie/tree/main/robotiq_2f85 MJCF], [https://github.com/a-price/robotiq_arg85_description URDF], [https://github.com/ros-industrial/robotiq/tree/kinetic-devel/robotiq_2f_85_gripper_visualization Xacro] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| BarrettHand || Barrett Technology || URDF || [https://github.com/jhu-lcsr-attic/bhand_model/tree/master/robots URDF] || BSD || ✖️ || ✔️ || ✔️
|-
| WSG 50 || SCHUNK || SDF || [https://github.com/RobotLocomotion/models/tree/master/wsg_50_description SDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Baxter Left End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/left_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Baxter Right End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/right_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|}
=== Educational ===
{| class="wikitable"
|-
! Name !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Double Pendulum || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/double_pendulum_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Simple Humanoid || URDF || [https://github.com/laas/simple_humanoid_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✖️
|-
| TriFingerEdu || URDF || [https://github.com/facebookresearch/differentiable-robot-model/tree/main/diff_robot_data/trifinger_edu_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| FingerEdu || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/finger_edu_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
== References ==
* https://github.com/robot-descriptions/awesome-robot-descriptions
* https://github.com/robot-descriptions/robot_descriptions.py
* https://github.com/robotology
c24af55b0b161b035b33da1f972040d13c7e24fd
1362
1361
2024-05-29T22:54:45Z
Vrtnis
21
/*Add Kengoro and SURALP*/
wikitext
text/x-wiki
=== Humanoids ===
{| class="wikitable sortable"
|-
! Sr No !! Name !! Maker !! Formats !! URL !! License !! Meshes !! Inertias !! Collisions
|-
| 1 || Stompy || K-Scale Labs || URDF, MJCF || [https://stompy.kscale.dev URDF], [https://stompy.kscale.dev MJCF] || MIT || ✔️ || ✔️ || ✔️
|-
| 2 || Digit || Agility Robotics || URDF || [https://github.com/adubredu/DigitRobot.jl URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| 3 || H1 || UNITREE Robotics || MJCF || [https://github.com/google-deepmind/mujoco_menagerie/tree/main/unitree_h1 MJCF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 4 || Atlas v4 || Boston Dynamics || URDF || [https://github.com/openai/roboschool/tree/1.0.49/roboschool/models_robot/atlas_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 5 || Valkyrie || NASA JSC Robotics || URDF, Xacro || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/val_description/model URDF], [https://gitlab.com/nasa-jsc-robotics/val_description Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 6 || JVRC-1 || AIST || MJCF, URDF || [https://github.com/isri-aist/jvrc_mj_description/ MJCF], [https://github.com/stephane-caron/jvrc_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 7 || iCub || IIT || URDF || [https://github.com/robotology/icub-models/tree/master/iCub URDF] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| 8 || JAXON || JSK || COLLADA, URDF, VRML || [https://github.com/stephane-caron/openrave_models/tree/master/JAXON COLLADA], [https://github.com/robot-descriptions/jaxon_description URDF], [https://github.com/start-jsk/rtmros_choreonoid/tree/master/jvrc_models/JAXON_JVRC VRML] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| 9 || Atlas DRC (v3) || Boston Dynamics || URDF || [https://github.com/RobotLocomotion/models/blob/master/atlas/atlas_convex_hull.urdf URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 10 || Gundam RX-78 || Bandai Namco Filmworks || URDF || [https://github.com/gundam-global-challenge/gundam_robot/tree/master/gundam_rx78_description URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| 11 || Romeo || Aldebaran Robotics || URDF || [https://github.com/ros-aldebaran/romeo_robot/tree/master/romeo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 12 || SigmaBan || Rhoban || URDF || [https://github.com/Rhoban/sigmaban_urdf URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 13 || Robonaut 2 || NASA JSC Robotics || URDF || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/r2_description URDF] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 14 || TALOS || PAL Robotics || URDF, Xacro || [https://github.com/stack-of-tasks/talos-data URDF], [https://github.com/pal-robotics/talos_robot/tree/kinetic-devel/talos_description Xacro] || LGPL-3.0, Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 15 || WALK-MAN || IIT || Xacro || [https://github.com/ADVRHumanoids/iit-walkman-ros-pkg/tree/master/walkman_urdf Xacro] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 16 || Draco3 || Apptronik || URDF || [https://github.com/shbang91/draco3_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 17 || ergoCub || IIT || URDF || [https://github.com/icub-tech-iit/ergocub-software/tree/master/urdf/ergoCub URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 18 || Baxter || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/baxter.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 19 || Pepper || SoftBank Robotics || URDF || [https://github.com/ros-naoqi/pepper_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 20 || Mini-Cheetah || MIT || URDF || [https://github.com/MIT-Mini-Cheetah/mini-cheetah URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 21 || Thor-Mang || ROBOTIS || URDF || [https://github.com/ROBOTIS-GIT/ROBOTIS-MANIPULATION-THORMANG URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 22 || Cassie || Agility Robotics || URDF || [https://github.com/agilityrobotics/cassie_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 23 || Sophia || Hanson Robotics || URDF || [https://github.com/hansonrobotics/sophia_robot URDF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 24 || Asimo || Honda || URDF || [https://github.com/honda/asimo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 25 || HRP-5P || Kawada Robotics || URDF || [https://github.com/kawada-robotics/hrp5p URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 26 || Valkyrie R5 || NASA || URDF, Xacro || [https://github.com/nasa/valkyrie_simulation URDF], [https://github.com/nasa/valkyrie_robot Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 27 || REEM-C || PAL Robotics || URDF || [https://github.com/pal-robotics/reemc_description URDF] || LGPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 28 || Darwin-OP || ROBOTIS || URDF || [https://github.com/ROBOTIS-GIT/Darwin_OP_ROS URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 29 || Poppy || Inria Flowers || URDF || [https://github.com/poppy-project/poppy_humanoid URDF] || GPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 30 || Kengoro || JSK || URDF || [https://github.com/jsk-ros-pkg/jsk_models/tree/master/kengoro_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 31 || SURALP || Istanbul Technical University || URDF || [https://github.com/suralp/suralp URDF] || GPL-3.0 || ✔️ || ✔️ || ✔️
|}
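Every entry above points to a robot description in URDF, MJCF, Xacro, or a similar format. As a minimal sketch of what such a description contains, the snippet below inspects an already-downloaded URDF with Python's standard library and lists the robot's links and joints; the file name is a placeholder for whichever description you fetched from the table.
<syntaxhighlight lang=python>
import xml.etree.ElementTree as ET

# Placeholder path: substitute the URDF you downloaded from one of the repositories above.
URDF_PATH = "robot.urdf"

robot = ET.parse(URDF_PATH).getroot()  # every URDF has a single <robot name="..."> root element

links = robot.findall("link")
joints = robot.findall("joint")
print(f"{robot.get('name')}: {len(links)} links, {len(joints)} joints")

# Each <joint> declares its type (revolute, prismatic, fixed, ...) and the links it connects.
for joint in joints:
    parent = joint.find("parent").get("link")
    child = joint.find("child").get("link")
    print(f"  {joint.get('name'):30s} {joint.get('type'):10s} {parent} -> {child}")
</syntaxhighlight>
The robot_descriptions Python package linked in the references below is one way to automate fetching these files instead of cloning each repository by hand.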
=== End Effectors ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Allegro Hand || Wonik Robotics || URDF, MJCF || [https://github.com/RobotLocomotion/models/tree/master/allegro_hand_description/urdf URDF], [https://github.com/google-deepmind/mujoco_menagerie/tree/main/wonik_allegro MJCF] || BSD || ✔️ || ✔️ || ✔️
|-
| Shadow Hand E3M5 || The Shadow Robot Company || MJCF || [https://github.com/deepmind/mujoco_menagerie/tree/main/shadow_hand MJCF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Robotiq 2F-85 || Robotiq || MJCF, URDF, Xacro || [https://github.com/deepmind/mujoco_menagerie/tree/main/robotiq_2f85 MJCF], [https://github.com/a-price/robotiq_arg85_description URDF], [https://github.com/ros-industrial/robotiq/tree/kinetic-devel/robotiq_2f_85_gripper_visualization Xacro] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| BarrettHand || Barrett Technology || URDF || [https://github.com/jhu-lcsr-attic/bhand_model/tree/master/robots URDF] || BSD || ✖️ || ✔️ || ✔️
|-
| WSG 50 || SCHUNK || SDF || [https://github.com/RobotLocomotion/models/tree/master/wsg_50_description SDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Baxter Left End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/left_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Baxter Right End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/right_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|}
=== Educational ===
{| class="wikitable"
|-
! Name !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Double Pendulum || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/double_pendulum_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Simple Humanoid || URDF || [https://github.com/laas/simple_humanoid_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✖️
|-
| TriFingerEdu || URDF || [https://github.com/facebookresearch/differentiable-robot-model/tree/main/diff_robot_data/trifinger_edu_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| FingerEdu || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/finger_edu_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
== References ==
* https://github.com/robot-descriptions/awesome-robot-descriptions
* https://github.com/robot-descriptions/robot_descriptions.py
* https://github.com/robotology
12dc09409bfbb8fc97f6af0c436ae5cef9f62c4b
1371
1362
2024-05-31T05:22:25Z
Vrtnis
21
/* Kengoro*/
wikitext
text/x-wiki
=== Humanoids ===
{| class="wikitable sortable"
|-
! Sr No !! Name !! Maker !! Formats !! URL !! License !! Meshes !! Inertias !! Collisions
|-
| 1 || Stompy || K-Scale Labs || URDF, MJCF || [https://stompy.kscale.dev URDF], [https://stompy.kscale.dev MJCF] || MIT || ✔️ || ✔️ || ✔️
|-
| 2 || Digit || Agility Robotics || URDF || [https://github.com/adubredu/DigitRobot.jl URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| 3 || H1 || UNITREE Robotics || MJCF || [https://github.com/google-deepmind/mujoco_menagerie/tree/main/unitree_h1 MJCF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 4 || Atlas v4 || Boston Dynamics || URDF || [https://github.com/openai/roboschool/tree/1.0.49/roboschool/models_robot/atlas_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 5 || Valkyrie || NASA JSC Robotics || URDF, Xacro || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/val_description/model URDF], [https://gitlab.com/nasa-jsc-robotics/val_description Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 6 || JVRC-1 || AIST || MJCF, URDF || [https://github.com/isri-aist/jvrc_mj_description/ MJCF], [https://github.com/stephane-caron/jvrc_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 7 || iCub || IIT || URDF || [https://github.com/robotology/icub-models/tree/master/iCub URDF] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| 8 || JAXON || JSK || COLLADA, URDF, VRML || [https://github.com/stephane-caron/openrave_models/tree/master/JAXON COLLADA], [https://github.com/robot-descriptions/jaxon_description URDF], [https://github.com/start-jsk/rtmros_choreonoid/tree/master/jvrc_models/JAXON_JVRC VRML] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| 9 || Atlas DRC (v3) || Boston Dynamics || URDF || [https://github.com/RobotLocomotion/models/blob/master/atlas/atlas_convex_hull.urdf URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 10 || Gundam RX-78 || Bandai Namco Filmworks || URDF || [https://github.com/gundam-global-challenge/gundam_robot/tree/master/gundam_rx78_description URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| 11 || Romeo || Aldebaran Robotics || URDF || [https://github.com/ros-aldebaran/romeo_robot/tree/master/romeo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 12 || SigmaBan || Rhoban || URDF || [https://github.com/Rhoban/sigmaban_urdf URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 13 || Robonaut 2 || NASA JSC Robotics || URDF || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/r2_description URDF] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 14 || TALOS || PAL Robotics || URDF, Xacro || [https://github.com/stack-of-tasks/talos-data URDF], [https://github.com/pal-robotics/talos_robot/tree/kinetic-devel/talos_description Xacro] || LGPL-3.0, Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 15 || WALK-MAN || IIT || Xacro || [https://github.com/ADVRHumanoids/iit-walkman-ros-pkg/tree/master/walkman_urdf Xacro] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 16 || Draco3 || Apptronik || URDF || [https://github.com/shbang91/draco3_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 17 || ergoCub || IIT || URDF || [https://github.com/icub-tech-iit/ergocub-software/tree/master/urdf/ergoCub URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 18 || Baxter || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/baxter.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 19 || Pepper || SoftBank Robotics || URDF || [https://github.com/ros-naoqi/pepper_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 20 || Mini-Cheetah || MIT || URDF || [https://github.com/MIT-Mini-Cheetah/mini-cheetah URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 21 || Thor-Mang || ROBOTIS || URDF || [https://github.com/ROBOTIS-GIT/ROBOTIS-MANIPULATION-THORMANG URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 22 || Cassie || Agility Robotics || URDF || [https://github.com/agilityrobotics/cassie_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 23 || Sophia || Hanson Robotics || URDF || [https://github.com/hansonrobotics/sophia_robot URDF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 24 || Asimo || Honda || URDF || [https://github.com/honda/asimo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 25 || HRP-5P || Kawada Robotics || URDF || [https://github.com/kawada-robotics/hrp5p URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 26 || Valkyrie R5 || NASA || URDF, Xacro || [https://github.com/nasa/valkyrie_simulation URDF], [https://github.com/nasa/valkyrie_robot Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 27 || REEM-C || PAL Robotics || URDF || [https://github.com/pal-robotics/reemc_description URDF] || LGPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 28 || Darwin-OP || ROBOTIS || URDF || [https://github.com/ROBOTIS-GIT/Darwin_OP_ROS URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 29 || Poppy || Inria Flowers || URDF || [https://github.com/poppy-project/poppy_humanoid URDF] || GPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 30 || Kengoro || JSK || URDF || [https://github.com/jsk-ros-pkg/jsk_models/tree/master/kengoro_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 31 || SURALP || Istanbul Technical University || URDF || [https://github.com/suralp/suralp URDF] || GPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 32 || Kengoro || JSK || URDF || [https://github.com/jsk-ros-pkg/jsk_models/tree/master/kengoro_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
=== End Effectors ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Allegro Hand || Wonik Robotics || URDF, MJCF || [https://github.com/RobotLocomotion/models/tree/master/allegro_hand_description/urdf URDF], [https://github.com/google-deepmind/mujoco_menagerie/tree/main/wonik_allegro MJCF] || BSD || ✔️ || ✔️ || ✔️
|-
| Shadow Hand E3M5 || The Shadow Robot Company || MJCF || [https://github.com/deepmind/mujoco_menagerie/tree/main/shadow_hand MJCF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Robotiq 2F-85 || Robotiq || MJCF, URDF, Xacro || [https://github.com/deepmind/mujoco_menagerie/tree/main/robotiq_2f85 MJCF], [https://github.com/a-price/robotiq_arg85_description URDF], [https://github.com/ros-industrial/robotiq/tree/kinetic-devel/robotiq_2f_85_gripper_visualization Xacro] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| BarrettHand || Barrett Technology || URDF || [https://github.com/jhu-lcsr-attic/bhand_model/tree/master/robots URDF] || BSD || ✖️ || ✔️ || ✔️
|-
| WSG 50 || SCHUNK || SDF || [https://github.com/RobotLocomotion/models/tree/master/wsg_50_description SDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Baxter Left End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/left_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Baxter Right End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/right_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|}
=== Educational ===
{| class="wikitable"
|-
! Name !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Double Pendulum || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/double_pendulum_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Simple Humanoid || URDF || [https://github.com/laas/simple_humanoid_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✖️
|-
| TriFingerEdu || URDF || [https://github.com/facebookresearch/differentiable-robot-model/tree/main/diff_robot_data/trifinger_edu_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| FingerEdu || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/finger_edu_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
== References ==
* https://github.com/robot-descriptions/awesome-robot-descriptions
* https://github.com/robot-descriptions/robot_descriptions.py
* https://github.com/robotology
cee333b4d2b81b44a1dfae4170f0fb7fe724f2e9
1372
1371
2024-05-31T05:25:34Z
Vrtnis
21
/* ANYmal */
wikitext
text/x-wiki
=== Humanoids ===
{| class="wikitable sortable"
|-
! Sr No !! Name !! Maker !! Formats !! URL !! License !! Meshes !! Inertias !! Collisions
|-
| 1 || Stompy || K-Scale Labs || URDF || [https://stompy.kscale.dev URDF], [https://stompy.kscale.dev MJCF] || MIT || ✔️ || ✔️ || ✔️
|-
| 2 || Digit || Agility Robotics || URDF || [https://github.com/adubredu/DigitRobot.jl URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| 3 || H1 || UNITREE Robotics || MJCF || [https://github.com/google-deepmind/mujoco_menagerie/tree/main/unitree_h1 MJCF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 4 || Atlas v4 || Boston Dynamics || URDF || [https://github.com/openai/roboschool/tree/1.0.49/roboschool/models_robot/atlas_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 5 || Valkyrie || NASA JSC Robotics || URDF, Xacro || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/val_description/model URDF], [https://gitlab.com/nasa-jsc-robotics/val_description Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 6 || JVRC-1 || AIST || MJCF, URDF || [https://github.com/isri-aist/jvrc_mj_description/ MJCF], [https://github.com/stephane-caron/jvrc_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 7 || iCub || IIT || URDF || [https://github.com/robotology/icub-models/tree/master/iCub URDF] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| 8 || JAXON || JSK || COLLADA, URDF, VRML || [https://github.com/stephane-caron/openrave_models/tree/master/JAXON COLLADA], [https://github.com/robot-descriptions/jaxon_description URDF], [https://github.com/start-jsk/rtmros_choreonoid/tree/master/jvrc_models/JAXON_JVRC VRML] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| 9 || Atlas DRC (v3) || Boston Dynamics || URDF || [https://github.com/RobotLocomotion/models/blob/master/atlas/atlas_convex_hull.urdf URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 10 || Gundam RX-78 || Bandai Namco Filmworks || URDF || [https://github.com/gundam-global-challenge/gundam_robot/tree/master/gundam_rx78_description URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| 11 || Romeo || Aldebaran Robotics || URDF || [https://github.com/ros-aldebaran/romeo_robot/tree/master/romeo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 12 || SigmaBan || Rhoban || URDF || [https://github.com/Rhoban/sigmaban_urdf URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 13 || Robonaut 2 || NASA JSC Robotics || URDF || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/r2_description URDF] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 14 || TALOS || PAL Robotics || URDF, Xacro || [https://github.com/stack-of-tasks/talos-data URDF], [https://github.com/pal-robotics/talos_robot/tree/kinetic-devel/talos_description Xacro] || LGPL-3.0, Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 15 || WALK-MAN || IIT || Xacro || [https://github.com/ADVRHumanoids/iit-walkman-ros-pkg/tree/master/walkman_urdf Xacro] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 16 || Draco3 || Apptronik || URDF || [https://github.com/shbang91/draco3_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 17 || ergoCub || IIT || URDF || [https://github.com/icub-tech-iit/ergocub-software/tree/master/urdf/ergoCub URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 18 || Baxter || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/baxter.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 19 || Pepper || SoftBank Robotics || URDF || [https://github.com/ros-naoqi/pepper_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 20 || Mini-Cheetah || MIT || URDF || [https://github.com/MIT-Mini-Cheetah/mini-cheetah URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 21 || Thor-Mang || ROBOTIS || URDF || [https://github.com/ROBOTIS-GIT/ROBOTIS-MANIPULATION-THORMANG URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 22 || Cassie || Agility Robotics || URDF || [https://github.com/agilityrobotics/cassie_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 23 || Sophia || Hanson Robotics || URDF || [https://github.com/hansonrobotics/sophia_robot URDF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 24 || Asimo || Honda || URDF || [https://github.com/honda/asimo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 25 || HRP-5P || Kawada Robotics || URDF || [https://github.com/kawada-robotics/hrp5p URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 26 || Valkyrie R5 || NASA || URDF, Xacro || [https://github.com/nasa/valkyrie_simulation URDF], [https://github.com/nasa/valkyrie_robot Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 27 || REEM-C || PAL Robotics || URDF || [https://github.com/pal-robotics/reemc_description URDF] || LGPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 28 || Darwin-OP || ROBOTIS || URDF || [https://github.com/ROBOTIS-GIT/Darwin_OP_ROS URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 29 || Poppy || Inria Flowers || URDF || [https://github.com/poppy-project/poppy_humanoid URDF] || GPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 30 || Kengoro || JSK || URDF || [https://github.com/jsk-ros-pkg/jsk_models/tree/master/kengoro_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 31 || SURALP || Istanbul Technical University || URDF || [https://github.com/suralp/suralp URDF] || GPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 32 || Kengoro || JSK || URDF || [https://github.com/jsk-ros-pkg/jsk_models/tree/master/kengoro_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 33 || ANYmal || ANYbotics || URDF || [https://github.com/leggedrobotics/anymal_b_simple_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
=== End Effectors ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Allegro Hand || Wonik Robotics || URDF, MJCF || [https://github.com/RobotLocomotion/models/tree/master/allegro_hand_description/urdf URDF], [https://github.com/google-deepmind/mujoco_menagerie/tree/main/wonik_allegro MJCF] || BSD || ✔️ || ✔️ || ✔️
|-
| Shadow Hand E3M5 || The Shadow Robot Company || MJCF || [https://github.com/deepmind/mujoco_menagerie/tree/main/shadow_hand MJCF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Robotiq 2F-85 || Robotiq || MJCF, URDF, Xacro || [https://github.com/deepmind/mujoco_menagerie/tree/main/robotiq_2f85 MJCF], [https://github.com/a-price/robotiq_arg85_description URDF], [https://github.com/ros-industrial/robotiq/tree/kinetic-devel/robotiq_2f_85_gripper_visualization Xacro] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| BarrettHand || Barrett Technology || URDF || [https://github.com/jhu-lcsr-attic/bhand_model/tree/master/robots URDF] || BSD || ✖️ || ✔️ || ✔️
|-
| WSG 50 || SCHUNK || SDF || [https://github.com/RobotLocomotion/models/tree/master/wsg_50_description SDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Baxter Left End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/left_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Baxter Right End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/right_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|}
=== Educational ===
{| class="wikitable"
|-
! Name !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Double Pendulum || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/double_pendulum_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Simple Humanoid || URDF || [https://github.com/laas/simple_humanoid_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✖️
|-
| TriFingerEdu || URDF || [https://github.com/facebookresearch/differentiable-robot-model/tree/main/diff_robot_data/trifinger_edu_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| FingerEdu || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/finger_edu_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
== References ==
* https://github.com/robot-descriptions/awesome-robot-descriptions
* https://github.com/robot-descriptions/robot_descriptions.py
* https://github.com/robotology
eb1ee085e56e0b6e75574f3664e1b1df0b5e7753
1373
1372
2024-05-31T05:30:23Z
Vrtnis
21
/*MIR */
wikitext
text/x-wiki
=== Humanoids ===
{| class="wikitable sortable"
|-
! Sr No !! Name !! Maker !! Formats !! URL !! License !! Meshes !! Inertias !! Collisions
|-
| 1 || Stompy || K-Scale Labs || URDF || [https://stompy.kscale.dev URDF], [https://stompy.kscale.dev MJCF] || MIT || ✔️ || ✔️ || ✔️
|-
| 2 || Digit || Agility Robotics || URDF || [https://github.com/adubredu/DigitRobot.jl URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| 3 || H1 || UNITREE Robotics || MJCF || [https://github.com/google-deepmind/mujoco_menagerie/tree/main/unitree_h1 MJCF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 4 || Atlas v4 || Boston Dynamics || URDF || [https://github.com/openai/roboschool/tree/1.0.49/roboschool/models_robot/atlas_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 5 || Valkyrie || NASA JSC Robotics || URDF, Xacro || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/val_description/model URDF], [https://gitlab.com/nasa-jsc-robotics/val_description Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 6 || JVRC-1 || AIST || MJCF, URDF || [https://github.com/isri-aist/jvrc_mj_description/ MJCF], [https://github.com/stephane-caron/jvrc_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 7 || iCub || IIT || URDF || [https://github.com/robotology/icub-models/tree/master/iCub URDF] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| 8 || JAXON || JSK || COLLADA, URDF, VRML || [https://github.com/stephane-caron/openrave_models/tree/master/JAXON COLLADA], [https://github.com/robot-descriptions/jaxon_description URDF], [https://github.com/start-jsk/rtmros_choreonoid/tree/master/jvrc_models/JAXON_JVRC VRML] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| 9 || Atlas DRC (v3) || Boston Dynamics || URDF || [https://github.com/RobotLocomotion/models/blob/master/atlas/atlas_convex_hull.urdf URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 10 || Gundam RX-78 || Bandai Namco Filmworks || URDF || [https://github.com/gundam-global-challenge/gundam_robot/tree/master/gundam_rx78_description URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| 11 || Romeo || Aldebaran Robotics || URDF || [https://github.com/ros-aldebaran/romeo_robot/tree/master/romeo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 12 || SigmaBan || Rhoban || URDF || [https://github.com/Rhoban/sigmaban_urdf URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 13 || Robonaut 2 || NASA JSC Robotics || URDF || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/r2_description URDF] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 14 || TALOS || PAL Robotics || URDF, Xacro || [https://github.com/stack-of-tasks/talos-data URDF], [https://github.com/pal-robotics/talos_robot/tree/kinetic-devel/talos_description Xacro] || LGPL-3.0, Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 15 || WALK-MAN || IIT || Xacro || [https://github.com/ADVRHumanoids/iit-walkman-ros-pkg/tree/master/walkman_urdf Xacro] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 16 || Draco3 || Apptronik || URDF || [https://github.com/shbang91/draco3_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 17 || ergoCub || IIT || URDF || [https://github.com/icub-tech-iit/ergocub-software/tree/master/urdf/ergoCub URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 18 || Baxter || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/baxter.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 19 || Pepper || SoftBank Robotics || URDF || [https://github.com/ros-naoqi/pepper_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 20 || Mini-Cheetah || MIT || URDF || [https://github.com/MIT-Mini-Cheetah/mini-cheetah URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 21 || Thor-Mang || ROBOTIS || URDF || [https://github.com/ROBOTIS-GIT/ROBOTIS-MANIPULATION-THORMANG URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 22 || Cassie || Agility Robotics || URDF || [https://github.com/agilityrobotics/cassie_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 23 || Sophia || Hanson Robotics || URDF || [https://github.com/hansonrobotics/sophia_robot URDF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 24 || Asimo || Honda || URDF || [https://github.com/honda/asimo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 25 || HRP-5P || Kawada Robotics || URDF || [https://github.com/kawada-robotics/hrp5p URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 26 || Valkyrie R5 || NASA || URDF, Xacro || [https://github.com/nasa/valkyrie_simulation URDF], [https://github.com/nasa/valkyrie_robot Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 27 || REEM-C || PAL Robotics || URDF || [https://github.com/pal-robotics/reemc_description URDF] || LGPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 28 || Darwin-OP || ROBOTIS || URDF || [https://github.com/ROBOTIS-GIT/Darwin_OP_ROS URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 29 || Poppy || Inria Flowers || URDF || [https://github.com/poppy-project/poppy_humanoid URDF] || GPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 30 || Kengoro || JSK || URDF || [https://github.com/jsk-ros-pkg/jsk_models/tree/master/kengoro_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 31 || SURALP || Istanbul Technical University || URDF || [https://github.com/suralp/suralp URDF] || GPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 32 || Kengoro || JSK || URDF || [https://github.com/jsk-ros-pkg/jsk_models/tree/master/kengoro_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 33 || ANYmal || ANYbotics || URDF || [https://github.com/leggedrobotics/anymal_b_simple_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 34 || MIR-Lola || Munich Institute of Robotics and Machine Intelligence || URDF || [https://github.com/mir-lab/lola_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
=== End Effectors ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Allegro Hand || Wonik Robotics || URDF, MJCF || [https://github.com/RobotLocomotion/models/tree/master/allegro_hand_description/urdf URDF], [https://github.com/google-deepmind/mujoco_menagerie/tree/main/wonik_allegro MJCF] || BSD || ✔️ || ✔️ || ✔️
|-
| Shadow Hand E3M5 || The Shadow Robot Company || MJCF || [https://github.com/deepmind/mujoco_menagerie/tree/main/shadow_hand MJCF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Robotiq 2F-85 || Robotiq || MJCF, URDF, Xacro || [https://github.com/deepmind/mujoco_menagerie/tree/main/robotiq_2f85 MJCF], [https://github.com/a-price/robotiq_arg85_description URDF], [https://github.com/ros-industrial/robotiq/tree/kinetic-devel/robotiq_2f_85_gripper_visualization Xacro] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| BarrettHand || Barrett Technology || URDF || [https://github.com/jhu-lcsr-attic/bhand_model/tree/master/robots URDF] || BSD || ✖️ || ✔️ || ✔️
|-
| WSG 50 || SCHUNK || SDF || [https://github.com/RobotLocomotion/models/tree/master/wsg_50_description SDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Baxter Left End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/left_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Baxter Right End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/right_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|}
=== Educational ===
{| class="wikitable"
|-
! Name !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Double Pendulum || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/double_pendulum_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Simple Humanoid || URDF || [https://github.com/laas/simple_humanoid_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✖️
|-
| TriFingerEdu || URDF || [https://github.com/facebookresearch/differentiable-robot-model/tree/main/diff_robot_data/trifinger_edu_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| FingerEdu || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/finger_edu_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
== References ==
* https://github.com/robot-descriptions/awesome-robot-descriptions
* https://github.com/robot-descriptions/robot_descriptions.py
* https://github.com/robotology
562afa83e686f21d7d80858255086219f39baa71
1374
1373
2024-05-31T05:34:14Z
Vrtnis
21
/*HSR and Pepper2*/
wikitext
text/x-wiki
=== Humanoids ===
{| class="wikitable sortable"
|-
! Sr No !! Name !! Maker !! Formats !! URL !! License !! Meshes !! Inertias !! Collisions
|-
| 1 || Stompy || K-Scale Labs || URDF || [https://stompy.kscale.dev URDF], [https://stompy.kscale.dev MJCF] || MIT || ✔️ || ✔️ || ✔️
|-
| 2 || Digit || Agility Robotics || URDF || [https://github.com/adubredu/DigitRobot.jl URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| 3 || H1 || UNITREE Robotics || MJCF || [https://github.com/google-deepmind/mujoco_menagerie/tree/main/unitree_h1 MJCF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 4 || Atlas v4 || Boston Dynamics || URDF || [https://github.com/openai/roboschool/tree/1.0.49/roboschool/models_robot/atlas_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 5 || Valkyrie || NASA JSC Robotics || URDF, Xacro || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/val_description/model URDF], [https://gitlab.com/nasa-jsc-robotics/val_description Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 6 || JVRC-1 || AIST || MJCF, URDF || [https://github.com/isri-aist/jvrc_mj_description/ MJCF], [https://github.com/stephane-caron/jvrc_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 7 || iCub || IIT || URDF || [https://github.com/robotology/icub-models/tree/master/iCub URDF] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| 8 || JAXON || JSK || COLLADA, URDF, VRML || [https://github.com/stephane-caron/openrave_models/tree/master/JAXON COLLADA], [https://github.com/robot-descriptions/jaxon_description URDF], [https://github.com/start-jsk/rtmros_choreonoid/tree/master/jvrc_models/JAXON_JVRC VRML] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| 9 || Atlas DRC (v3) || Boston Dynamics || URDF || [https://github.com/RobotLocomotion/models/blob/master/atlas/atlas_convex_hull.urdf URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 10 || Gundam RX-78 || Bandai Namco Filmworks || URDF || [https://github.com/gundam-global-challenge/gundam_robot/tree/master/gundam_rx78_description URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| 11 || Romeo || Aldebaran Robotics || URDF || [https://github.com/ros-aldebaran/romeo_robot/tree/master/romeo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 12 || SigmaBan || Rhoban || URDF || [https://github.com/Rhoban/sigmaban_urdf URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 13 || Robonaut 2 || NASA JSC Robotics || URDF || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/r2_description URDF] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 14 || TALOS || PAL Robotics || URDF, Xacro || [https://github.com/stack-of-tasks/talos-data URDF], [https://github.com/pal-robotics/talos_robot/tree/kinetic-devel/talos_description Xacro] || LGPL-3.0, Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 15 || WALK-MAN || IIT || Xacro || [https://github.com/ADVRHumanoids/iit-walkman-ros-pkg/tree/master/walkman_urdf Xacro] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 16 || Draco3 || Apptronik || URDF || [https://github.com/shbang91/draco3_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 17 || ergoCub || IIT || URDF || [https://github.com/icub-tech-iit/ergocub-software/tree/master/urdf/ergoCub URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 18 || Baxter || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/baxter.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 19 || Pepper || SoftBank Robotics || URDF || [https://github.com/ros-naoqi/pepper_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 20 || Mini-Cheetah || MIT || URDF || [https://github.com/MIT-Mini-Cheetah/mini-cheetah URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 21 || Thor-Mang || ROBOTIS || URDF || [https://github.com/ROBOTIS-GIT/ROBOTIS-MANIPULATION-THORMANG URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 22 || Cassie || Agility Robotics || URDF || [https://github.com/agilityrobotics/cassie_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 23 || Sophia || Hanson Robotics || URDF || [https://github.com/hansonrobotics/sophia_robot URDF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 24 || Asimo || Honda || URDF || [https://github.com/honda/asimo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 25 || HRP-5P || Kawada Robotics || URDF || [https://github.com/kawada-robotics/hrp5p URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 26 || Valkyrie R5 || NASA || URDF, Xacro || [https://github.com/nasa/valkyrie_simulation URDF], [https://github.com/nasa/valkyrie_robot Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 27 || REEM-C || PAL Robotics || URDF || [https://github.com/pal-robotics/reemc_description URDF] || LGPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 28 || Darwin-OP || ROBOTIS || URDF || [https://github.com/ROBOTIS-GIT/Darwin_OP_ROS URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 29 || Poppy || Inria Flowers || URDF || [https://github.com/poppy-project/poppy_humanoid URDF] || GPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 30 || Kengoro || JSK || URDF || [https://github.com/jsk-ros-pkg/jsk_models/tree/master/kengoro_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 31 || SURALP || Istanbul Technical University || URDF || [https://github.com/suralp/suralp URDF] || GPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 32 || Kengoro || JSK || URDF || [https://github.com/jsk-ros-pkg/jsk_models/tree/master/kengoro_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 33 || ANYmal || ANYbotics || URDF || [https://github.com/leggedrobotics/anymal_b_simple_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 34 || MIR-Lola || Munich Institute of Robotics and Machine Intelligence || URDF || [https://github.com/mir-lab/lola_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 35 || HSR || Toyota || URDF || [https://github.com/toyota-research-institute/hsr_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 36 || Pepper 2 || SoftBank Robotics || URDF || [https://github.com/ros-naoqi/pepper_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|}
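Most of the entries above link straight to a public Git repository, so a description can usually be fetched and inspected without any extra tooling. The following is a minimal sketch (assuming only git is installed; the exact file layout inside each repository may differ) that pulls the Unitree H1 MJCF listed in the table:
<syntaxhighlight lang=bash>
# Clone the MuJoCo Menagerie repository that hosts the Unitree H1 description listed above
git clone --depth 1 https://github.com/google-deepmind/mujoco_menagerie.git
cd mujoco_menagerie/unitree_h1

# List the MJCF files and bundled mesh assets shipped with the description
ls
</syntaxhighlight>
Descriptions that live inside larger repositories (for example the NASA, JSK and PAL models) follow the same pattern: clone the parent repository and point the simulator or loader at the relevant *_description directory.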
=== End Effectors ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Allegro Hand || Wonik Robotics || URDF, MJCF || [https://github.com/RobotLocomotion/models/tree/master/allegro_hand_description/urdf URDF], [https://github.com/google-deepmind/mujoco_menagerie/tree/main/wonik_allegro MJCF] || BSD || ✔️ || ✔️ || ✔️
|-
| Shadow Hand E3M5 || The Shadow Robot Company || MJCF || [https://github.com/deepmind/mujoco_menagerie/tree/main/shadow_hand MJCF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Robotiq 2F-85 || Robotiq || MJCF, URDF, Xacro || [https://github.com/deepmind/mujoco_menagerie/tree/main/robotiq_2f85 MJCF], [https://github.com/a-price/robotiq_arg85_description URDF], [https://github.com/ros-industrial/robotiq/tree/kinetic-devel/robotiq_2f_85_gripper_visualization Xacro] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| BarrettHand || Barrett Technology || URDF || [https://github.com/jhu-lcsr-attic/bhand_model/tree/master/robots URDF] || BSD || ✖️ || ✔️ || ✔️
|-
| WSG 50 || SCHUNK || SDF || [https://github.com/RobotLocomotion/models/tree/master/wsg_50_description SDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Baxter Left End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/left_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Baxter Right End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/right_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|}
=== Educational ===
{| class="wikitable"
|-
! Name !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Double Pendulum || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/double_pendulum_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Simple Humanoid || URDF || [https://github.com/laas/simple_humanoid_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✖️
|-
| TriFingerEdu || URDF || [https://github.com/facebookresearch/differentiable-robot-model/tree/main/diff_robot_data/trifinger_edu_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| FingerEdu || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/finger_edu_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
== References ==
* https://github.com/robot-descriptions/awesome-robot-descriptions
* https://github.com/robot-descriptions/robot_descriptions.py
* https://github.com/robotology
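The robot_descriptions.py package linked above wraps many of these models behind a single Python import. As a rough sketch of its documented usage (the module name used here, jvrc_description, is an assumption and should be checked against the package's index before relying on it):
<syntaxhighlight lang=bash>
# Assumed usage of the robot_descriptions package referenced above; module names
# follow the <robot>_description convention and should be verified against its index.
pip install robot_descriptions
python -c "from robot_descriptions import jvrc_description; print(jvrc_description.URDF_PATH)"
</syntaxhighlight>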
c3d2994b5bbb41ceeaa05150d5b0d41f3c66a023
K-Scale Lecture Circuit
0
299
1363
2024-05-29T23:05:16Z
Ben
2
Created page with "Lecture Circuit {| class="wikitable" |- ! Date ! Presenter ! Topic ! Link |- | 2024.06.07 | | | |- | 2024.06.06 | | | |- | 2024.06.05 | | | |- | 2024.06.04 | | | |- | 2024.06..."
wikitext
text/x-wiki
Lecture Circuit
{| class="wikitable"
|-
! Date
! Presenter
! Topic
! Link
|-
| 2024.06.07
|
|
|
|-
| 2024.06.06
|
|
|
|-
| 2024.06.05
|
|
|
|-
| 2024.06.04
|
|
|
|-
| 2024.06.03
|
|
|
|-
| 2024.05.31
| Dennis
| Speech representation learning papers
|
|-
| 2024.05.30
| Isaac
| VLMs
|
|-
| 2024.05.29
| Allen
| PPO
|
|}
0e57ffa05ae7fe66e221ccbf8a1c314bb1b3be22
1364
1363
2024-05-29T23:06:14Z
Ben
2
wikitext
text/x-wiki
{| class="wikitable"
|-
! Date
! Presenter
! Topic
! Link
|-
| 2024.06.07
| Ben
| Quantization
|
|-
| 2024.06.06
|
|
|
|-
| 2024.06.05
|
|
|
|-
| 2024.06.04
|
|
|
|-
| 2024.06.03
|
|
|
|-
| 2024.05.31
| Dennis
| Speech representation learning papers
|
|-
| 2024.05.30
| Isaac
| VLMs
|
|-
| 2024.05.29
| Allen
| PPO
|
|}
e00c98e54afe0b4791963b872c7f3fae3a4a2950
1365
1364
2024-05-29T23:07:44Z
Ben
2
wikitext
text/x-wiki
{| class="wikitable"
|-
! Date
! Presenter
! Topic
! Link
|-
| 2024.06.07
| Ben
| Quantization
|
|-
| 2024.06.06
|
|
|
|-
| 2024.06.05
|
|
|
|-
| 2024.06.04
|
|
|
|-
| 2024.06.03
|
|
|
|-
| 2024.05.31
| Dennis
| Speech representation learning papers
|
|-
| 2024.05.30
| Isaac
| VLMs
|
|-
| 2024.05.29
| Allen
| PPO
|
|}
[[Category: K-Scale]]
56d1cd595b186705626f2e50768a846a7c77ff60
1367
1365
2024-05-29T23:17:44Z
Qdot.me
32
wikitext
text/x-wiki
{| class="wikitable"
|-
! Date
! Presenter
! Topic
! Link
|-
| 2024.06.07
| Ben
| Quantization
|
|-
| 2024.06.06
| Tom
| Linux Raw
|
|-
| 2024.06.05
|
|
|
|-
| 2024.06.04
|
|
|
|-
| 2024.06.03
|
|
|
|-
| 2024.05.31
| Dennis
| Speech representation learning papers
|
|-
| 2024.05.30
| Isaac
| VLMs
|
|-
| 2024.05.29
| Allen
| PPO
|
|}
[[Category: K-Scale]]
ae0fc2cedc89861af9ed407dbdd0341add56cdd7
1368
1367
2024-05-29T23:25:30Z
136.62.52.52
0
wikitext
text/x-wiki
{| class="wikitable"
|-
! Date
! Presenter
! Topic
! Link
|-
| 2024.06.07
| Ben
| Quantization
|
|-
| 2024.06.06
| Tom
| Linux Raw
|
|-
| 2024.06.05
| Hugo
| Gaussian Splats
|
|-
| 2024.06.04
|
|
|
|-
| 2024.06.03
|
|
|
|-
| 2024.05.31
| Dennis
| Speech representation learning papers
|
|-
| 2024.05.30
| Isaac
| VLMs
|
|-
| 2024.05.29
| Allen
| PPO
|
|}
[[Category: K-Scale]]
03482e9b2a91b342429121a1a8fa6972d8be86b2
1369
1368
2024-05-31T03:30:17Z
Budzianowski
19
wikitext
text/x-wiki
{| class="wikitable"
|-
! Date
! Presenter
! Topic
! Link
|-
| 2024.06.07
| Ben
| Quantization
|
|-
| 2024.06.06
| Tom
| Linux Raw
|
|-
| 2024.06.05
| Hugo
| Gaussian Splats
|
|-
| 2024.06.04
| Paweł
| What I want to believe in
|
|-
| 2024.06.03
|
|
|
|-
| 2024.05.31
| Dennis
| Speech representation learning papers
|
|-
| 2024.05.30
| Isaac
| VLMs
|
|-
| 2024.05.29
| Allen
| PPO
|
|}
[[Category: K-Scale]]
2c4e306f539fd6be13881762eebe857f33003283
1370
1369
2024-05-31T03:56:48Z
Budzianowski
19
wikitext
text/x-wiki
{| class="wikitable"
|-
! Date
! Presenter
! Topic
! Link
|-
| 2024.06.07
| Ben
| Quantization
|
|-
| 2024.06.06
| Tom
| Linux Raw
|
|-
| 2024.06.05
| Hugo
| Gaussian Splats
|
|-
| 2024.06.04
| Paweł
| What I (want to) believe in
|
|-
| 2024.06.03
|
|
|
|-
| 2024.05.31
| Dennis
| Speech representation learning papers
|
|-
| 2024.05.30
| Isaac
| VLMs
|
|-
| 2024.05.29
| Allen
| PPO
|
|}
[[Category: K-Scale]]
a1b587162ae840351b83c57cfbbcad8a2e2ffe68
1399
1370
2024-05-31T18:59:59Z
Budzianowski
19
wikitext
text/x-wiki
{| class="wikitable"
|-
! Date
! Presenter
! Topic
! Link
|-
| Add next one
|
|
|
|-
| 2024.06.07
| Ben
| Quantization
|
|-
| 2024.06.06
| Tom
| Linux Raw
|
|-
| 2024.06.05
| Hugo
| Gaussian Splats
|
|-
| 2024.06.04
| Paweł
| What I (want to) believe in
|
|-
| 2024.06.03
|
|
|
|-
| 2024.05.31
| Dennis
| Speech representation learning papers
|
|-
| 2024.05.30
| Isaac
| VLMs
|
|-
| 2024.05.29
| Allen
| PPO
|
|}
[[Category: K-Scale]]
da37183fd44ae5809f8077010501d271e0dad40f
User:Qdot.me
2
300
1366
2024-05-29T23:16:37Z
Qdot.me
32
Created page with "Tom Mloduchowski - Firmware @ KScale [http://qdot.me qdot.me]"
wikitext
text/x-wiki
Tom Mloduchowski - Firmware @ KScale
[http://qdot.me qdot.me]
24d97c8b23275b53770049cccf4ae44550b5ed93
Humanoid Robots Applications
0
301
1375
2024-05-31T05:40:56Z
Vrtnis
21
Created page with "== Research and Education == '''Research and Educational Applications''' === Educational Tool === * '''STEM Education:''' Provides hands-on learning experiences in robotics,..."
wikitext
text/x-wiki
== Research and Education ==
'''Research and Educational Applications'''
=== Educational Tool ===
* '''STEM Education:''' Provides hands-on learning experiences in robotics, programming, and engineering.
* '''Interactive Learning:''' Engages students in interactive projects, enhancing understanding of robotic systems and control algorithms.
7a5585a3f15d5f74c07d41da7bce6b1ee82cca99
1376
1375
2024-05-31T05:42:36Z
Vrtnis
21
wikitext
text/x-wiki
== Research and Education ==
'''Research and Educational Applications'''
=== Research Applications ===
* '''Human-Robot Interaction Studies:''' Used to study interaction patterns, improving user experience and safety in collaborative environments.
* '''Algorithm Development:''' Serves as a testbed for developing and refining robotics algorithms, including motion planning and manipulation.
* '''AI and Machine Learning:''' Facilitates research in AI-driven robotics, enabling experiments with machine learning models for autonomous behaviors.
=== Potential Projects and Experiments ===
* '''Robotic Path Planning:''' Developing and testing pathfinding algorithms in dynamic environments.
* '''Manipulation Tasks:''' Experimenting with different grasping and manipulation techniques using the robot’s end effectors.
* '''Sensor Integration:''' Integrating and testing various sensors to enhance robot perception and decision-making.
=== Educational Tool ===
* '''STEM Education:''' Provides hands-on learning experiences in robotics, programming, and engineering.
* '''Interactive Learning:''' Engages students in interactive projects, enhancing understanding of robotic systems and control algorithms.
1e6c41a63011cf3eda78b3f4620277394ab5d4b3
1377
1376
2024-05-31T05:44:06Z
Vrtnis
21
wikitext
text/x-wiki
== Research and Education ==
'''Research and Educational Applications'''
=== Research Applications ===
* '''Human-Robot Interaction Studies:''' Used to study interaction patterns, improving user experience and safety in collaborative environments.
* '''Algorithm Development:''' Serves as a testbed for developing and refining robotics algorithms, including motion planning and manipulation.
* '''AI and Machine Learning:''' Facilitates research in AI-driven robotics, enabling experiments with machine learning models for autonomous behaviors.
=== Potential Projects and Experiments ===
* '''Robotic Path Planning:''' Developing and testing pathfinding algorithms in dynamic environments.
* '''Manipulation Tasks:''' Experimenting with different grasping and manipulation techniques using the robot’s end effectors.
* '''Sensor Integration:''' Integrating and testing various sensors to enhance robot perception and decision-making.
=== Educational Tool ===
* '''STEM Education:''' Provides hands-on learning experiences in robotics, programming, and engineering.
* '''Interactive Learning:''' Engages students in interactive projects, enhancing understanding of robotic systems and control algorithms.
----
== Consumer Applications ==
'''Home and Personal Assistance Applications'''
=== Household Chores ===
* '''Task Performance:''' Assists with household tasks such as picking up items, organizing spaces, and operating household appliances.
* '''Customizable Routines:''' Programmed to perform specific routines tailored to individual needs, increasing convenience and efficiency.
b8aa6a51150b3989af2fab4ee95d45486ee29c00
1378
1377
2024-05-31T05:44:32Z
Vrtnis
21
wikitext
text/x-wiki
== Research and Education ==
'''Research and Educational Applications'''
=== Research Applications ===
* '''Human-Robot Interaction Studies:''' Used to study interaction patterns, improving user experience and safety in collaborative environments.
* '''Algorithm Development:''' Serves as a testbed for developing and refining robotics algorithms, including motion planning and manipulation.
* '''AI and Machine Learning:''' Facilitates research in AI-driven robotics, enabling experiments with machine learning models for autonomous behaviors.
=== Potential Projects and Experiments ===
* '''Robotic Path Planning:''' Developing and testing pathfinding algorithms in dynamic environments.
* '''Manipulation Tasks:''' Experimenting with different grasping and manipulation techniques using the robot’s end effectors.
* '''Sensor Integration:''' Integrating and testing various sensors to enhance robot perception and decision-making.
=== Educational Tool ===
* '''STEM Education:''' Provides hands-on learning experiences in robotics, programming, and engineering.
* '''Interactive Learning:''' Engages students in interactive projects, enhancing understanding of robotic systems and control algorithms.
----
== Consumer Applications ==
'''Home and Personal Assistance Applications'''
=== Household Chores ===
* '''Task Performance:''' Assists with household tasks such as picking up items, organizing spaces, and operating household appliances.
* '''Customizable Routines:''' Programmed to perform specific routines tailored to individual needs, increasing convenience and efficiency.
=== Assistance for Individuals ===
* '''Enhanced Independence:''' Provides physical assistance, enabling individuals with mobility impairments to perform daily tasks.
* '''Remote Monitoring and Control:''' Allows caregivers to remotely monitor and control the robot, ensuring timely assistance and support.
4a0f8031b3031eecdb48094d329752badd0c0790
1379
1378
2024-05-31T05:44:56Z
Vrtnis
21
/* Consumer Applications */
wikitext
text/x-wiki
== Research and Education ==
'''Research and Educational Applications'''
=== Research Applications ===
* '''Human-Robot Interaction Studies:''' Used to study interaction patterns, improving user experience and safety in collaborative environments.
* '''Algorithm Development:''' Serves as a testbed for developing and refining robotics algorithms, including motion planning and manipulation.
* '''AI and Machine Learning:''' Facilitates research in AI-driven robotics, enabling experiments with machine learning models for autonomous behaviors.
=== Potential Projects and Experiments ===
* '''Robotic Path Planning:''' Developing and testing pathfinding algorithms in dynamic environments.
* '''Manipulation Tasks:''' Experimenting with different grasping and manipulation techniques using the robot’s end effectors.
* '''Sensor Integration:''' Integrating and testing various sensors to enhance robot perception and decision-making.
=== Educational Tool ===
* '''STEM Education:''' Provides hands-on learning experiences in robotics, programming, and engineering.
* '''Interactive Learning:''' Engages students in interactive projects, enhancing understanding of robotic systems and control algorithms.
----
== Consumer Applications ==
'''Home and Personal Assistance Applications'''
=== Household Chores ===
* '''Task Performance:''' Assists with household tasks such as picking up items, organizing spaces, and operating household appliances.
* '''Customizable Routines:''' Programmed to perform specific routines tailored to individual needs, increasing convenience and efficiency.
=== Assistance for Individuals ===
* '''Enhanced Independence:''' Provides physical assistance, enabling individuals with mobility impairments to perform daily tasks.
* '''Remote Monitoring and Control:''' Allows caregivers to remotely monitor and control the robot, ensuring timely assistance and support.
=== Quality of Life Improvement ===
* '''Companionship and Interaction:''' Offers social interaction and companionship, improving mental well-being.
* '''Safety and Security:''' Monitors home environments for safety hazards and can alert caregivers or emergency services if needed.
* '''Adaptive Learning:''' Learns and adapts to user preferences and routines, providing personalized assistance over time.
59abbe009dc6bb1d33f6ad5e1dc2d47a7243ee59
1380
1379
2024-05-31T05:45:04Z
Vrtnis
21
/* Consumer Applications */
wikitext
text/x-wiki
== Research and Education ==
'''Research and Educational Applications'''
=== Research Applications ===
* '''Human-Robot Interaction Studies:''' Used to study interaction patterns, improving user experience and safety in collaborative environments.
* '''Algorithm Development:''' Serves as a testbed for developing and refining robotics algorithms, including motion planning and manipulation.
* '''AI and Machine Learning:''' Facilitates research in AI-driven robotics, enabling experiments with machine learning models for autonomous behaviors.
=== Potential Projects and Experiments ===
* '''Robotic Path Planning:''' Developing and testing pathfinding algorithms in dynamic environments.
* '''Manipulation Tasks:''' Experimenting with different grasping and manipulation techniques using the robot’s end effectors.
* '''Sensor Integration:''' Integrating and testing various sensors to enhance robot perception and decision-making.
=== Educational Tool ===
* '''STEM Education:''' Provides hands-on learning experiences in robotics, programming, and engineering.
* '''Interactive Learning:''' Engages students in interactive projects, enhancing understanding of robotic systems and control algorithms.
----
== Consumer Applications ==
'''Home and Personal Assistance Applications'''
=== Household Chores ===
* '''Task Performance:''' Assists with household tasks such as picking up items, organizing spaces, and operating household appliances.
* '''Customizable Routines:''' Programmed to perform specific routines tailored to individual needs, increasing convenience and efficiency.
=== Assistance for Individuals ===
* '''Enhanced Independence:''' Provides physical assistance, enabling individuals with mobility impairments to perform daily tasks.
* '''Remote Monitoring and Control:''' Allows caregivers to remotely monitor and control the robot, ensuring timely assistance and support.
=== Quality of Life Improvement ===
* '''Companionship and Interaction:''' Offers social interaction and companionship, improving mental well-being.
* '''Safety and Security:''' Monitors home environments for safety hazards and can alert caregivers or emergency services if needed.
* '''Adaptive Learning:''' Learns and adapts to user preferences and routines, providing personalized assistance over time.
7c47e439cc91bdcf34af3a8896d6c60e78556d37
1381
1380
2024-05-31T05:46:40Z
Vrtnis
21
wikitext
text/x-wiki
== Research and Education ==
'''Research and Educational Applications'''
=== Research Applications ===
* '''Human-Robot Interaction Studies:''' Used to study interaction patterns, improving user experience and safety in collaborative environments.
* '''Algorithm Development:''' Serves as a testbed for developing and refining robotics algorithms, including motion planning and manipulation.
* '''AI and Machine Learning:''' Facilitates research in AI-driven robotics, enabling experiments with machine learning models for autonomous behaviors.
=== Potential Projects and Experiments ===
* '''Robotic Path Planning:''' Developing and testing pathfinding algorithms in dynamic environments.
* '''Manipulation Tasks:''' Experimenting with different grasping and manipulation techniques using the robot’s end effectors.
* '''Sensor Integration:''' Integrating and testing various sensors to enhance robot perception and decision-making.
=== Educational Tool ===
* '''STEM Education:''' Provides hands-on learning experiences in robotics, programming, and engineering.
* '''Interactive Learning:''' Engages students in interactive projects, enhancing understanding of robotic systems and control algorithms.
----
== Consumer Applications ==
'''Home and Personal Assistance Applications'''
=== Household Chores ===
* '''Task Performance:''' Assists with household tasks such as picking up items, organizing spaces, and operating household appliances.
* '''Customizable Routines:''' Programmed to perform specific routines tailored to individual needs, increasing convenience and efficiency.
=== Assistance for Individuals ===
* '''Enhanced Independence:''' Provides physical assistance, enabling individuals with mobility impairments to perform daily tasks.
* '''Remote Monitoring and Control:''' Allows caregivers to remotely monitor and control the robot, ensuring timely assistance and support.
=== Quality of Life Improvement ===
* '''Companionship and Interaction:''' Offers social interaction and companionship, improving mental well-being.
* '''Safety and Security:''' Monitors home environments for safety hazards and can alert caregivers or emergency services if needed.
* '''Adaptive Learning:''' Learns and adapts to user preferences and routines, providing personalized assistance over time.
== Industrial Applications ==
'''Industrial Applications'''
=== Warehouse Logistics ===
* '''Task Automation:''' Automates repetitive tasks such as picking, packing, and sorting items.
* '''Efficient Navigation:''' Capable of navigating through warehouse aisles, reducing time and increasing accuracy in order fulfillment.
* '''Real-Time Inventory Management:''' Scans and updates inventory data, ensuring up-to-date stock levels and reducing discrepancies.
=== Material Handling ===
* '''Versatile End Effectors:''' Equipped with clamp end effectors for handling medium-sized boxes and containers.
* '''Safe Handling:''' Reduces the risk of damage to items by ensuring precise and careful movement.
* '''Enhanced Safety:''' Reduces the need for human workers to perform physically demanding and potentially hazardous tasks.
=== Benefits and Impact ===
* '''Increased Productivity:''' Faster and more accurate task completion boosts overall warehouse efficiency.
* '''Cost Savings:''' Automation reduces labor costs and minimizes errors.
* '''Scalability:''' Easily adaptable to different tasks and warehouse configurations.
fab916c8c5615ba1b893ad655b562c46c376246d
1382
1381
2024-05-31T05:47:07Z
Vrtnis
21
wikitext
text/x-wiki
== Research and Education ==
'''Research and Educational Applications'''
=== Research Applications ===
* '''Human-Robot Interaction Studies:''' Used to study interaction patterns, improving user experience and safety in collaborative environments.
* '''Algorithm Development:''' Serves as a testbed for developing and refining robotics algorithms, including motion planning and manipulation.
* '''AI and Machine Learning:''' Facilitates research in AI-driven robotics, enabling experiments with machine learning models for autonomous behaviors.
=== Potential Projects and Experiments ===
* '''Robotic Path Planning:''' Developing and testing pathfinding algorithms in dynamic environments.
* '''Manipulation Tasks:''' Experimenting with different grasping and manipulation techniques using the robot’s end effectors.
* '''Sensor Integration:''' Integrating and testing various sensors to enhance robot perception and decision-making.
=== Educational Tool ===
* '''STEM Education:''' Provides hands-on learning experiences in robotics, programming, and engineering.
* '''Interactive Learning:''' Engages students in interactive projects, enhancing understanding of robotic systems and control algorithms.
----
== Consumer Applications ==
'''Home and Personal Assistance Applications'''
=== Household Chores ===
* '''Task Performance:''' Assists with household tasks such as picking up items, organizing spaces, and operating household appliances.
* '''Customizable Routines:''' Programmed to perform specific routines tailored to individual needs, increasing convenience and efficiency.
=== Assistance for Individuals ===
* '''Enhanced Independence:''' Provides physical assistance, enabling individuals with mobility impairments to perform daily tasks.
* '''Remote Monitoring and Control:''' Allows caregivers to remotely monitor and control the robot, ensuring timely assistance and support.
=== Quality of Life Improvement ===
* '''Companionship and Interaction:''' Offers social interaction and companionship, improving mental well-being.
* '''Safety and Security:''' Monitors home environments for safety hazards and can alert caregivers or emergency services if needed.
* '''Adaptive Learning:''' Learns and adapts to user preferences and routines, providing personalized assistance over time.
----
== Industrial Applications ==
'''Industrial Applications'''
=== Warehouse Logistics ===
* '''Task Automation:''' Automates repetitive tasks such as picking, packing, and sorting items.
* '''Efficient Navigation:''' Capable of navigating through warehouse aisles, reducing time and increasing accuracy in order fulfillment.
* '''Real-Time Inventory Management:''' Scans and updates inventory data, ensuring up-to-date stock levels and reducing discrepancies.
=== Material Handling ===
* '''Versatile End Effectors:''' Equipped with clamp end effectors for handling medium-sized boxes and containers.
* '''Safe Handling:''' Reduces the risk of damage to items by ensuring precise and careful movement.
* '''Enhanced Safety:''' Reduces the need for human workers to perform physically demanding and potentially hazardous tasks.
=== Benefits and Impact ===
* '''Increased Productivity:''' Faster and more accurate task completion boosts overall warehouse efficiency.
* '''Cost Savings:''' Automation reduces labor costs and minimizes errors.
* '''Scalability:''' Easily adaptable to different tasks and warehouse configurations.
6617af73fef27d5826b9e2a18bed158357fe5206
1383
1382
2024-05-31T05:47:27Z
Vrtnis
21
wikitext
text/x-wiki
== Research and Education ==
'''Research and Educational Applications'''
=== Research Applications ===
* '''Human-Robot Interaction Studies:''' Used to study interaction patterns, improving user experience and safety in collaborative environments.
* '''Algorithm Development:''' Serves as a testbed for developing and refining robotics algorithms, including motion planning and manipulation.
* '''AI and Machine Learning:''' Facilitates research in AI-driven robotics, enabling experiments with machine learning models for autonomous behaviors.
=== Potential Projects and Experiments ===
* '''Robotic Path Planning:''' Developing and testing pathfinding algorithms in dynamic environments.
* '''Manipulation Tasks:''' Experimenting with different grasping and manipulation techniques using the robot’s end effectors.
* '''Sensor Integration:''' Integrating and testing various sensors to enhance robot perception and decision-making.
=== Educational Tool ===
* '''STEM Education:''' Provides hands-on learning experiences in robotics, programming, and engineering.
* '''Interactive Learning:''' Engages students in interactive projects, enhancing understanding of robotic systems and control algorithms.
----
== Consumer Applications ==
'''Home and Personal Assistance Applications'''
=== Household Chores ===
* '''Task Performance:''' Assists with household tasks such as picking up items, organizing spaces, and operating household appliances.
* '''Customizable Routines:''' Programmed to perform specific routines tailored to individual needs, increasing convenience and efficiency.
=== Assistance for Individuals ===
* '''Enhanced Independence:''' Provides physical assistance, enabling individuals with mobility impairments to perform daily tasks.
* '''Remote Monitoring and Control:''' Allows caregivers to remotely monitor and control the robot, ensuring timely assistance and support.
=== Quality of Life Improvement ===
* '''Companionship and Interaction:''' Offers social interaction and companionship, improving mental well-being.
* '''Safety and Security:''' Monitors home environments for safety hazards and can alert caregivers or emergency services if needed.
* '''Adaptive Learning:''' Learns and adapts to user preferences and routines, providing personalized assistance over time.
----
== Industrial Applications ==
'''Industrial Applications'''
=== Warehouse Logistics ===
* '''Task Automation:''' Automates repetitive tasks such as picking, packing, and sorting items.
* '''Efficient Navigation:''' Capable of navigating through warehouse aisles, reducing time and increasing accuracy in order fulfillment.
* '''Real-Time Inventory Management:''' Scans and updates inventory data, ensuring up-to-date stock levels and reducing discrepancies.
=== Material Handling ===
* '''Versatile End Effectors:''' Equipped with clamp end effectors for handling medium-sized boxes and containers.
* '''Safe Handling:''' Reduces the risk of damage to items by ensuring precise and careful movement.
* '''Enhanced Safety:''' Reduces the need for human workers to perform physically demanding and potentially hazardous tasks.
=== Benefits and Impact ===
* '''Increased Productivity:''' Faster and more accurate task completion boosts overall warehouse efficiency.
* '''Cost Savings:''' Automation reduces labor costs and minimizes errors.
* '''Scalability:''' Easily adaptable to different tasks and warehouse configurations.
f56e24ab4a1b05b5eab62e79d6c658ef4f9d8cfa
Robot Web Viewer
0
302
1384
2024-05-31T06:03:45Z
Vrtnis
21
Created page with "= Robot Web Viewer = == Overview == The **Robot Web Viewer** provides a web-based interface for viewing and interacting with URDF (Unified Robot Description Format) files. Th..."
wikitext
text/x-wiki
= Robot Web Viewer =
== Overview ==
The **Robot Web Viewer** provides a web-based interface for viewing and interacting with URDF (Unified Robot Description Format) files. The viewer is built using **Reactjs**, **three.js**, and **urdf-loader**.
== Features ==
* **Web-based URDF Viewer**: Allows users to load and visualize URDF files directly in their web browser.
* **Simplified Meshes**: Supports simplified meshes for efficient rendering.
* **Interactive Controls**: Users can interact with the 3D models using standard web-based controls.
* **Easy Setup and Deployment**: The project includes scripts for easy local setup and deployment.
== Setup Instructions ==
To set up the Robot Web Viewer locally, follow these steps:
=== Clone the Repository ===
<syntaxhighlight lang=bash>
$ git clone https://github.com/vrtnis/robot-web-viewer.git
</syntaxhighlight>
68a6ce0344b102503abf700ff04ab90b35dbbd4b
1385
1384
2024-05-31T06:03:54Z
Vrtnis
21
wikitext
text/x-wiki
The **Robot Web Viewer** provides a web-based interface for viewing and interacting with URDF (Unified Robot Description Format) files. The viewer is built using **Reactjs**, **three.js**, and **urdf-loader**.
== Features ==
* **Web-based URDF Viewer**: Allows users to load and visualize URDF files directly in their web browser.
* **Simplified Meshes**: Supports simplified meshes for efficient rendering.
* **Interactive Controls**: Users can interact with the 3D models using standard web-based controls.
* **Easy Setup and Deployment**: The project includes scripts for easy local setup and deployment.
== Setup Instructions ==
To set up the Robot Web Viewer locally, follow these steps:
=== Clone the Repository ===
```bash
$ git clone https://github.com/vrtnis/robot-web-viewer.git
eea93541ac9cfad9a73a783ec4903a233e4a0a38
1386
1385
2024-05-31T06:04:18Z
Vrtnis
21
wikitext
text/x-wiki
The '''Robot Web Viewer''' provides a web-based interface for viewing and interacting with URDF (Unified Robot Description Format) files. The viewer is built using **Reactjs**, **three.js**, and **urdf-loader**.
== Features ==
* **Web-based URDF Viewer**: Allows users to load and visualize URDF files directly in their web browser.
* **Simplified Meshes**: Supports simplified meshes for efficient rendering.
* **Interactive Controls**: Users can interact with the 3D models using standard web-based controls.
* **Easy Setup and Deployment**: The project includes scripts for easy local setup and deployment.
== Setup Instructions ==
To set up the Robot Web Viewer locally, follow these steps:
=== Clone the Repository ===
```bash
$ git clone https://github.com/vrtnis/robot-web-viewer.git
a9c647992c0458729d02136ebbd83ff1fc3a11dd
1387
1386
2024-05-31T06:05:25Z
Vrtnis
21
wikitext
text/x-wiki
The '''Robot Web Viewer''' provides a web-based interface for viewing and interacting with URDF (Unified Robot Description Format) files. The viewer is built using **Reactjs**, **three.js**, and **urdf-loader**.
== Features ==
* '''Web-based URDF Viewer''': Allows users to load and visualize URDF files directly in their web browser.
* '''Simplified Meshes''': Supports simplified meshes for efficient rendering.
* '''Interactive Controls''': Users can interact with the 3D models using standard web-based controls.
* '''Easy Setup and Deployment''': The project includes scripts for easy local setup and deployment.
== Setup Instructions ==
To set up the Robot Web Viewer locally, follow these steps:
=== Clone the Repository ===
```bash
$ git clone https://github.com/vrtnis/robot-web-viewer.git
7b639661c75de418b87eb172b99e311a47f05c1d
1388
1387
2024-05-31T06:05:58Z
Vrtnis
21
wikitext
text/x-wiki
The '''Robot Web Viewer''' provides a web-based interface for viewing and interacting with URDF (Unified Robot Description Format) files. The viewer is built using '''Reactjs''', '''three.js''', and '''urdf-loader'''.
== Features ==
* '''Web-based URDF Viewer''': Allows users to load and visualize URDF files directly in their web browser.
* '''Simplified Meshes''': Supports simplified meshes for efficient rendering.
* '''Interactive Controls''': Users can interact with the 3D models using standard web-based controls.
* '''Easy Setup and Deployment''': The project includes scripts for easy local setup and deployment.
== Setup Instructions ==
To set up the Robot Web Viewer locally, follow these steps:
=== Clone the Repository ===
```bash
$ git clone https://github.com/vrtnis/robot-web-viewer.git
aa9efb1c8bd72382d84fcc99390156522c4f8698
1389
1388
2024-05-31T06:07:06Z
Vrtnis
21
/* Clone the Repository */
wikitext
text/x-wiki
The '''Robot Web Viewer''' provides a web-based interface for viewing and interacting with URDF (Unified Robot Description Format) files. The viewer is built using '''Reactjs''', '''three.js''', and '''urdf-loader'''.
== Features ==
* '''Web-based URDF Viewer''': Allows users to load and visualize URDF files directly in their web browser.
* '''Simplified Meshes''': Supports simplified meshes for efficient rendering.
* '''Interactive Controls''': Users can interact with the 3D models using standard web-based controls.
* '''Easy Setup and Deployment''': The project includes scripts for easy local setup and deployment.
== Setup Instructions ==
To set up the Robot Web Viewer locally, follow these steps:
=== Clone the Repository ===
<syntaxhighlight lang="bash">
$ git clone https://github.com/vrtnis/robot-web-viewer.git
</syntaxhighlight>
8fc43c277dc68a214e2958501eb379854fc305df
1391
1389
2024-05-31T06:12:58Z
Vrtnis
21
wikitext
text/x-wiki
[[File:Stompy web viewer.gif|frame|200px|left]]
The '''Robot Web Viewer''' provides a web-based interface for viewing and interacting with URDF (Unified Robot Description Format) files. The viewer is built using '''Reactjs''', '''three.js''', and '''urdf-loader'''.
== Features ==
* '''Web-based URDF Viewer''': Allows users to load and visualize URDF files directly in their web browser.
* '''Simplified Meshes''': Supports simplified meshes for efficient rendering.
* '''Interactive Controls''': Users can interact with the 3D models using standard web-based controls.
* '''Easy Setup and Deployment''': The project includes scripts for easy local setup and deployment.
== Setup Instructions ==
To set up the Robot Web Viewer locally, follow these steps:
=== Clone the Repository ===
<syntaxhighlight lang="bash">
$ git clone https://github.com/vrtnis/robot-web-viewer.git
</syntaxhighlight>
bac3695c43d889a8addd72e9715999fe6517ef83
1392
1391
2024-05-31T06:14:59Z
Vrtnis
21
wikitext
text/x-wiki
[[File:Stompy web viewer.gif|frame|200px|left]]
The '''Robot Web Viewer''' provides a web-based interface for viewing and interacting with URDF (Unified Robot Description Format) files. The viewer is built using '''Reactjs''', '''three.js''', and '''urdf-loader'''.
== Features ==
* '''Web-based URDF Viewer''': Allows users to load and visualize URDF files directly in their web browser.
* '''Simplified Meshes''': Supports simplified meshes for efficient rendering.
* '''Interactive Controls''': Users can interact with the 3D models using standard web-based controls.
* '''Easy Setup and Deployment''': The project includes scripts for easy local setup and deployment.
== Setup Instructions ==
To set up the Robot Web Viewer locally, follow these steps:
=== Clone the Repository ===
<syntaxhighlight lang="bash">
$ git clone https://github.com/vrtnis/robot-web-viewer.git
</syntaxhighlight>
=== Setup ===
<syntaxhighlight lang="bash">
$ git clone https://github.com/vrtnis/robot-web-viewer.git
$ cd robot-web-viewer/
$ yarn install
</syntaxhighlight>
ff1a4b8fa15f51c63b53362ea0e9d78c213f54d2
1393
1392
2024-05-31T06:17:01Z
Vrtnis
21
wikitext
text/x-wiki
[[File:Stompy web viewer.gif|frame|200px|left]]
The '''Robot Web Viewer''' provides a web-based interface for viewing and interacting with URDF (Unified Robot Description Format) files. The viewer is built using '''Reactjs''', '''three.js''', and '''urdf-loader'''.
== Features ==
* '''Web-based URDF Viewer''': Allows users to load and visualize URDF files directly in their web browser.
* '''Simplified Meshes''': Supports simplified meshes for efficient rendering.
* '''Interactive Controls''': Users can interact with the 3D models using standard web-based controls.
* '''Easy Setup and Deployment''': The project includes scripts for easy local setup and deployment.
== Setup Instructions ==
To set up the Robot Web Viewer locally, follow these steps:
=== Clone the Repository ===
<syntaxhighlight lang="bash">
$ git clone https://github.com/vrtnis/robot-web-viewer.git
</syntaxhighlight>
=== Setup ===
<syntaxhighlight lang="bash">
$ git clone https://github.com/vrtnis/robot-web-viewer.git
$ cd robot-web-viewer/
$ yarn install
</syntaxhighlight>
=== Run ===
<syntaxhighlight lang="bash">
$ yarn start
</syntaxhighlight>
0b0f8ea0eaf1fd05c0c0aa6126a723f38603a7ee
1394
1393
2024-05-31T06:19:21Z
Vrtnis
21
wikitext
text/x-wiki
[[File:Stompy web viewer.gif|frame|200px|left]]
The '''Robot Web Viewer''' provides a web-based interface for viewing and interacting with URDF (Unified Robot Description Format) files. The viewer is built using '''Reactjs''', '''three.js''', and '''urdf-loader''', and has been forked and fully updated to run on the latest stable dependencies.
[https://github.com/vrtnis/robot-web-viewer https://github.com/vrtnis/robot-web-viewer]
== Features ==
* '''Web-based URDF Viewer''': Allows users to load and visualize URDF files directly in their web browser.
* '''Simplified Meshes''': Supports simplified meshes for efficient rendering.
* '''Interactive Controls''': Users can interact with the 3D models using standard web-based controls.
* '''Easy Setup and Deployment''': The project includes scripts for easy local setup and deployment.
== Setup Instructions ==
To set up the Robot Web Viewer locally, follow these steps:
=== Clone the Repository ===
<syntaxhighlight lang="bash">
$ git clone https://github.com/vrtnis/robot-web-viewer.git
</syntaxhighlight>
=== Setup ===
<syntaxhighlight lang="bash">
$ git clone https://github.com/vrtnis/robot-web-viewer.git
$ cd robot-web-viewer/
$ yarn install
</syntaxhighlight>
=== Run ===
<syntaxhighlight lang="bash">
$ yarn start
</syntaxhighlight>
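=== Example: Loading a URDF Programmatically ===
The sketch below is a rough illustration, not code taken from the repository, of how a URDF model can be loaded into a three.js scene with urdf-loader, which is the approach the viewer builds on. The file paths, package name, and joint name are placeholders, and the actual React component wiring in the project may differ.
<syntaxhighlight lang="typescript">
// Minimal illustrative sketch: load a URDF into a three.js scene with urdf-loader.
// The paths, package name, and joint name below are placeholders, not from the repo.
import * as THREE from 'three';
import URDFLoader from 'urdf-loader';

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 100);
camera.position.set(1, 1, 1);
camera.lookAt(0, 0, 0);

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);
scene.add(new THREE.AmbientLight(0xffffff, 0.8));

// urdf-loader resolves package:// URIs in the URDF through this map.
const loader = new URDFLoader();
loader.packages = { my_robot_description: '/urdf/my_robot_description' };

loader.load('/urdf/my_robot_description/robot.urdf', (robot) => {
  scene.add(robot);
  // Joints can be posed by name once the model is loaded (joint name is illustrative).
  robot.setJointValue('left_shoulder_pitch', 0.5);
});

// Standard three.js render loop.
function animate(): void {
  requestAnimationFrame(animate);
  renderer.render(scene, camera);
}
animate();
</syntaxhighlight>
In the viewer itself these steps are wrapped in React components, but the loader and scene-graph usage should follow broadly the same pattern.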
d4e4ab6fb1826ccbddd11404670ef048524353d0
File:Stompy web viewer.gif
6
303
1390
2024-05-31T06:10:51Z
Vrtnis
21
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
Main Page
0
1
1395
1355
2024-05-31T16:28:35Z
Budzianowski
19
/* List of Humanoid Robots */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.youtube.com/watch?v=LiNgr1tz49I&list=PLZnJoM76RM6ItAfZIxJYNKdaR_BobleLY Advanced Robot Dynamics]
| High-quality open-source course from CMU
|-
| [https://www.youtube.com/watch?v=6rUdAOCNXAU&list=PLZnJoM76RM6KugDT9sw5zhAmqKnGeoLRa Optimal Control]
| High-quality open-source course from CMU
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
| Textbook for graduate-level courses in robotics
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Courses]]
| Category for pages about useful courses related to robotics
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Firmware]]
| Category for pages relating to firmware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Papers]]
| Category for humanoid robotics papers
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Honda Robotics]]
| [[ASIMO]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Instituto Italiano]]
| [[iCub]]
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[SoftBank Robotics]]
| [[Pepper]], [[NAO]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Unitree]]
| [[H1]], [[G1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[Xpeng]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]], [[Robonaut2]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Physical Intelligence]]
|
|-
| [[Skild]]
|
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[Proxy]]
|
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[SUPCON]]
| [[Navigator α]]
|-
| [[LASER Robotics]]
| [[HECTOR V2]]
|-
| [[UC Berkeley]]
| [[Berkeley Blue]]
|-
| [[NEURA Robotics]]
| [[4NE-1]]
|-
| [[University of Tehran]]
| [[Surena IV]]
|-
| [[Haier]]
| [[Kuavo (Kuafu)]]
|-
| [[Stanford Robotics Lab]]
| [[OceanOneK]]
|-
| [[WorkFar]]
| [[WorkFar Syntro]]
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== List of Actuators ===
{| class="wikitable"
|-
! Actuator
! Notes
|-
| [[OBot]]
| Open-source actuator
|-
| [[SPIN Servo]]
| Open-source actuator
|-
| [[VESCular6]]
| A project based on [[VESC]]
|-
| [[ODrive]]
| A precision motor controller
|-
| [[Solo Motor Controller]]
| A motor controller alternative to the [[ODrive]].
|-
| [[J60]]
| Actuators built for the [[DEEP Robotics]] quadrupeds.
|-
| [[K-Scale Motor Controller]]
| An open-source motor controller
|}
94bc0861059a9da6746d6fb2d4ec8a0dde1b50e6
1396
1395
2024-05-31T16:30:40Z
Budzianowski
19
/* List of Humanoid Robots */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.youtube.com/watch?v=LiNgr1tz49I&list=PLZnJoM76RM6ItAfZIxJYNKdaR_BobleLY Advanced Robot Dynamics]
| High-quality open-source course from CMU
|-
| [https://www.youtube.com/watch?v=6rUdAOCNXAU&list=PLZnJoM76RM6KugDT9sw5zhAmqKnGeoLRa Optimal Control]
| High-quality open-source course from CMU
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
| Textbook for graduate-level courses in robotics
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Courses]]
| Category for pages about useful courses related to robotics
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Firmware]]
| Category for pages relating to firmware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Papers]]
| Category for humanoid robotics papers
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Haier]]
| [[Kuavo (Kuafu)]]
|-
| [[Honda Robotics]]
| [[ASIMO]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Instituto Italiano]]
| [[iCub]]
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[SoftBank Robotics]]
| [[Pepper]], [[NAO]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Unitree]]
| [[H1]], [[G1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Stanford Robotics Lab]]
| [[OceanOneK]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[Xpeng]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]], [[Robonaut2]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[Proxy]]
|
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[SUPCON]]
| [[Navigator α]]
|-
| [[LASER Robotics]]
| [[HECTOR V2]]
|-
| [[UC Berkeley]]
| [[Berkeley Blue]]
|-
| [[NEURA Robotics]]
| [[4NE-1]]
|-
| [[University of Tehran]]
| [[Surena IV]]
|-
| [[WorkFar]]
| [[WorkFar Syntro]]
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== List of Actuators ===
{| class="wikitable"
|-
! Actuator
! Notes
|-
| [[OBot]]
| Open-source actuator
|-
| [[SPIN Servo]]
| Open-source actuator
|-
| [[VESCular6]]
| A project based on [[VESC]]
|-
| [[ODrive]]
| A precision motor controller
|-
| [[Solo Motor Controller]]
| A motor controller alternative to the [[ODrive]].
|-
| [[J60]]
| Actuators built for the [[DEEP Robotics]] quadrupeds.
|-
| [[K-Scale Motor Controller]]
| An open-source motor controller
|}
dc0713097deef34fe07ec58b19eecbf397abd309
1397
1396
2024-05-31T16:33:27Z
Budzianowski
19
/* List of Humanoid Robots */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.youtube.com/watch?v=LiNgr1tz49I&list=PLZnJoM76RM6ItAfZIxJYNKdaR_BobleLY Advanced Robot Dynamics]
| High-quality open-source course from CMU
|-
| [https://www.youtube.com/watch?v=6rUdAOCNXAU&list=PLZnJoM76RM6KugDT9sw5zhAmqKnGeoLRa Optimal Control]
| High-quality open-source course from CMU
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
| Textbook for graduate-level courses in robotics
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Courses]]
| Category for pages about useful courses related to robotics
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Firmware]]
| Category for pages relating to firmware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Papers]]
| Category for humanoid robotics papers
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[Haier]]
| [[Kuavo (Kuafu)]]
|-
| [[Honda Robotics]]
| [[ASIMO]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Instituto Italiano]]
| [[iCub]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[Stanford Robotics Lab]]
| [[OceanOneK]]
|-
| [[SoftBank Robotics]]
| [[Pepper]], [[NAO]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Unitree]]
| [[H1]], [[G1]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[Xpeng]]
| [[PX5]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[NASA]]
| [[Valkyrie]], [[Robonaut2]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[Proxy]]
|
|-
| [[SUPCON]]
| [[Navigator α]]
|-
| [[LASER Robotics]]
| [[HECTOR V2]]
|-
| [[UC Berkeley]]
| [[Berkeley Blue]]
|-
| [[NEURA Robotics]]
| [[4NE-1]]
|-
| [[University of Tehran]]
| [[Surena IV]]
|-
| [[WorkFar]]
| [[WorkFar Syntro]]
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== List of Actuators ===
{| class="wikitable"
|-
! Actuator
! Notes
|-
| [[OBot]]
| Open-source actuator
|-
| [[SPIN Servo]]
| Open-source actuator
|-
| [[VESCular6]]
| A project based on [[VESC]]
|-
| [[ODrive]]
| A precision motor controller
|-
| [[Solo Motor Controller]]
| A motor controller alternative to the [[ODrive]].
|-
| [[J60]]
| Actuators built for the [[DEEP Robotics]] quadrupeds.
|-
| [[K-Scale Motor Controller]]
| An open-source motor controller
|}
ee4a335a41b0db2b589bbba475aa59955a96d22e
1398
1397
2024-05-31T16:35:09Z
Budzianowski
19
/* List of Humanoid Robots */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.youtube.com/watch?v=LiNgr1tz49I&list=PLZnJoM76RM6ItAfZIxJYNKdaR_BobleLY Advanced Robot Dynamics]
| High-quality open-source course from CMU
|-
| [https://www.youtube.com/watch?v=6rUdAOCNXAU&list=PLZnJoM76RM6KugDT9sw5zhAmqKnGeoLRa Optimal Control]
| High-quality open-source course from CMU
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
| Textbook for graduate-level courses in robotics
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Courses]]
| Category for pages about useful courses related to robotics
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Firmware]]
| Category for pages relating to firmware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Papers]]
| Category for humanoid robotics papers
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== List of Humanoid Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Haier]]
| [[Kuavo (Kuafu)]]
|-
| [[Honda Robotics]]
| [[ASIMO]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Instituto Italiano]]
| [[iCub]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[LASER Robotics]]
| [[HECTOR V2]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[NASA]]
| [[Valkyrie]], [[Robonaut2]]
|-
| [[NEURA Robotics]]
| [[4NE-1]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[Proxy]]
|
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[SoftBank Robotics]]
| [[Pepper]], [[NAO]]
|-
| [[Stanford Robotics Lab]]
| [[OceanOneK]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[SUPCON]]
| [[Navigator α]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[UC Berkeley]]
| [[Berkeley Blue]]
|-
| [[Unitree]]
| [[H1]], [[G1]]
|-
| [[University of Tehran]]
| [[Surena IV]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[WorkFar]]
| [[WorkFar Syntro]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Xpeng]]
| [[PX5]]
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== List of Actuators ===
{| class="wikitable"
|-
! Actuator
! Notes
|-
| [[OBot]]
| Open-source actuator
|-
| [[SPIN Servo]]
| Open-source actuator
|-
| [[VESCular6]]
| A project based on [[VESC]]
|-
| [[ODrive]]
| A precision motor controller
|-
| [[Solo Motor Controller]]
| A motor controller alternative to the [[ODrive]].
|-
| [[J60]]
| Actuators built for the [[DEEP Robotics]] quadrupeds.
|-
| [[K-Scale Motor Controller]]
| An open-source motor controller
|}
830cbdbe875817b090703b80231c1e3d5e908860
K-Scale Weekly Progress Updates
0
294
1400
1346
2024-05-31T19:02:17Z
108.211.178.220
0
wikitext
text/x-wiki
{| class="wikitable"
|-
! Link
|-
| [https://twitter.com/kscalelabs/status/1788968705378181145 2024.05.10]
|-
| [https://x.com/kscalelabs/status/1791507358780461496 2024.05.17]
|-
| [https://x.com/kscalelabs/status/1794109131214712914 2024.05.24]
|-
| [https://x.com/kscalelabs/status/1796617681455775944 2024.05.31]
|}
[[Category:K-Scale]]
1c7682457870cd831a81feb76d073e0af268e03f
Stompy
0
2
1401
1071
2024-06-01T04:25:54Z
Dymaxion
22
wikitext
text/x-wiki
[[File:Stompy.jpg|right|300px|thumb]]
{{infobox robot
| name = Stompy
| organization = [[K-Scale Labs]]
| cost = USD 10,000
}}
Stompy is an open-source humanoid robot developed by [[K-Scale Labs]]. Here are some relevant links:
* [[Stompy To-Do List]]
* [[Stompy Build Guide]]
* [[Gripper History]]
= Hardware =
This page details the hardware selections for humanoid robots, covering components such as actuators, cameras, compute units, PCBs and modules, batteries, displays, microphones, speakers, and wiring and connectors.
== Actuators ==
Actuators are the components that allow the robot to move and interact with its environment. They convert energy into mechanical motion. Common types used in humanoid robots include:
* Servo motors
* Stepper motors
* Linear actuators
== Cameras ==
Cameras are essential for visual processing, allowing the robot to perceive its surroundings. Important considerations include:
* Resolution and frame rate
* Field of view
* Depth sensing capabilities (3D cameras)
== Compute ==
The compute hardware handles the robot's processing requirements. This includes:
* Microprocessors and microcontrollers
* Single-board computers like Raspberry Pi or Nvidia Jetson
* Dedicated AI accelerators for machine learning tasks
== PCB and Modules ==
Printed Circuit Boards (PCBs) and the modules on them are the backbone of the robot's electronic system.
* Main control board
* Power management modules
* Sensor interfaces
* Communication modules (Wi-Fi, Bluetooth)
== Batteries ==
Batteries provide the necessary power to all robotic systems and are crucial for mobile autonomy. Selection factors include:
* Battery type (Li-Ion, NiMH, Lead-Acid)
* Capacity (measured in mAh or Ah)
* Voltage and energy density
* Safety features and durability
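As a rough, purely illustrative sizing example: a 48 V, 20 Ah pack stores about 960 Wh, so at a sustained draw of around 500 W it would run for a little under two hours, before accounting for converter losses and discharge limits.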
== Displays ==
Displays present information such as system status, data, and interactive elements. Key features include:
* Size variations ranging from small to large panels
* Touchscreen capabilities
* High-resolution displays
== Microphones ==
Microphones enable the robot to receive and process audio inputs, crucial for voice commands and auditory data. Factors to consider are:
* Sensitivity and noise cancellation
* Directionality (omnidirectional vs. unidirectional)
* Integration with voice recognition software
== Speakers ==
Speakers allow the robot to communicate audibly with its environment, essential for interaction and alerts. Considerations include:
* Power output and sound quality
* Size and mounting options
* Compatibility with audio processing hardware
== Wiring and Connectors ==
Proper wiring and connectors ensure reliable communication and power supply throughout the robot's components.
* Types of wires (gauge, shielding)
* Connectors (pin types, waterproofing)
* Cable management solutions
=== Conventions ===
The images below show our pin convention for the CAN bus when using various connectors.
<gallery>
Kscale db9 can bus convention.jpg
Kscale phoenix can bus convention.jpg
</gallery>
= Simulation =
For the latest simulation artifacts, see [https://kscale.dev/ the website].
= Artwork =
Here's some art of Stompy!
<gallery>
Stompy 1.png
Stompy 2.png
Stompy 3.png
Stompy 4.png
</gallery>
[[Category:Robots]]
[[Category:Open Source]]
[[Category:K-Scale]]
ff7854d1e249f1aa6d72cfaa47f51d43aed4b25d
Gripper History
0
304
1402
2024-06-01T04:26:16Z
Dymaxion
22
Created page with "==Gripper Design Iterations== Original Design: [[File:Gripper 0.jpg|thumb|The non-optimized gripper based on the UMI study]] Problems: Too flexible at fingertips, too stiff in..."
wikitext
text/x-wiki
==Gripper Design Iterations==
Original Design:
[[File:Gripper 0.jpg|thumb|The non-optimized gripper based on the UMI study]]
Problems: Too flexible at fingertips, too stiff in inner surface
Updated version 1:
[[File:Gripper version 1.jpg|thumb|Updated version 1]]
Made ribs thinner at their ends so that they would bend more easily
Added triangular cuts along the inner surface to allow the material to flex
Problems: The triangular cut closest to the tip caused the inner surface of the grippers to tear
Updated version 2:
[[File:Stompy gripper 2.jpg|thumb|Updated Version 2]]
Moved slots to only align with the three thickest ribs, reducing potential for tears
Created an angled profile where the tips of the grippers are narrower than the base
Problems: Screw holes were too large for screws to be inserted from below, tears persisted in some cases
Updated version 3:
[[File:Stompy gripper version 3.jpg|thumb|Updated version 3]]
Decreased hole size to allow for better threading
Problems: tears persisted, insufficient clearance for hex key to actually tighten the screws from above (necessary for use with slide rail)
Updated version 4:
[[File:Gripper version 4.jpg|thumb|Updated version 4]]
Decreased profile of rib material to make screw installation process easier, decreased cross section of individual ribs to allow them to bend more
Problems: too flimsy in general, and particularly at tips
Updated version 5:
[[File:Updated version 5.jpg|thumb|Updated version 5]]
Restored rib material and eliminated one triangular cut to solve tearing issue, attempted to print entirely without supports
Problems: poor print quality due to lack of supports
Updated version 6:
Used solid tip and fewer ribs in the other section.
Assembly with screws remained awkward
Updated version 7:
Decreased profile of rib material to make screw installation process easier
34b68c7d70de79faae6f2aea040e6375977afc43
File:Gripper version 1.jpg
6
305
1403
2024-06-01T04:28:22Z
Dymaxion
22
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:Gripper 0.jpg
6
306
1404
2024-06-01T04:30:14Z
Dymaxion
22
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:Stompy gripper 2.jpg
6
307
1405
2024-06-01T04:32:24Z
Dymaxion
22
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:Stompy gripper version 3.jpg
6
308
1406
2024-06-01T04:34:23Z
Dymaxion
22
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:Gripper version 4.jpg
6
309
1407
2024-06-01T04:35:01Z
Dymaxion
22
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:Updated version 5.jpg
6
310
1408
2024-06-01T04:35:37Z
Dymaxion
22
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
Gripper History
0
304
1409
1402
2024-06-01T04:42:37Z
Dymaxion
22
/* Gripper Design Iterations */
wikitext
text/x-wiki
==Gripper Design Iterations==
Original Design:
[[File:Gripper 0.jpg|thumb|The non-optimized gripper based on the UMI study]]
Problems: Too flexible at fingertips, too stiff in inner surface
Updated version 1:
[[File:Gripper version 1.jpg|thumb|Updated version 1]]
Made ribs thinner at their ends so that they would bend more easily
Added triangular cuts along the inner surface to allow the material to flex
Problems: The triangular cut closest to the tip caused the inner surface of the grippers to tear
Updated version 2:
[[File:Stompy gripper 2.jpg|thumb|Updated Version 2]]
Moved slots to only align with the three thickest ribs, reducing potential for tears
Created an angled profile where the tips of the grippers are narrower than the base
Problems: Screw holes were too large for screws to be inserted from below, tears persisted in some cases
Updated version 3:
[[File:Stompy gripper version 3.jpg|thumb|Updated version 3]]
Decreased hole size to allow for better threading
Problems: tears persisted, insufficient clearance for hex key to actually tighten the screws from above (necessary for use with slide rail)
Updated version 4:
[[File:Gripper version 4.jpg|thumb|Updated version 4]]
Decreased profile of rib material to make screw installation process easier, decreased cross section of individual ribs to allow them to bend more
Problems: too flimsy in general, and particularly at tips
Updated version 5:
[[File:Updated version 5.jpg|thumb|Updated version 5]]
Restored rib material and eliminated one triangular cut to solve tearing issue, attempted to print entirely without supports
Problems: poor print quality due to lack of supports
Updated version 6:
Used solid tip and fewer ribs in the other section.
Problems: Tip will not bend around objects
Assembly with screws remained awkward
Updated version 7:
Decreased profile of rib material to make screw installation process easier
30e02f43adb0c63a56cc0399234bf3a0ea5d515e
1410
1409
2024-06-01T05:03:39Z
Dymaxion
22
wikitext
text/x-wiki
== Gripper Design Iterations ==
=== Original Design ===
Problems: Too flexible at fingertips, too stiff in inner surface.
[[File:Gripper 0.jpg|none|300px|The non-optimized gripper based on the UMI study]]
=== Updated Version 1 ===
Made ribs thinner at their ends so that they would bend more easily. Added triangular cuts along the inner surface to allow the material to flex.
Problems: The triangular cut closest to the tip caused the inner surface of the grippers to tear.
[[File:Gripper version 1.jpg|none|300px|Updated version 1]]
=== Updated Version 2 ===
Moved slots to only align with the three thickest ribs, reducing potential for tears. Created an angled profile where the tips of the grippers are narrower than the base.
Problems: Screw holes were too large for screws to be inserted from below, tears persisted in some cases.
[[File:Stompy gripper 2.jpg|none|300px|Updated Version 2]]
=== Updated Version 3 ===
Decreased hole size to allow for better threading.
Problems: Tears persisted, insufficient clearance for hex key to actually tighten the screws from above (necessary for use with slide rail).
[[File:Stompy gripper version 3.jpg|none|300px|Updated version 3]]
=== Updated Version 4 ===
Decreased profile of rib material to make screw installation process easier, decreased cross section of individual ribs to allow them to bend more.
Problems: Too flimsy in general, and particularly at tips.
[[File:Gripper version 4.jpg|none|300px|Updated version 4]]
=== Updated Version 5 ===
Restored rib material and eliminated one triangular cut to solve tearing issue, attempted to print entirely without supports.
Problems: Poor print quality due to lack of supports.
[[File:Updated version 5.jpg|none|300px|Updated version 5]]
=== Updated Version 6 ===
Used solid tip and fewer ribs in the other section. Assembly with screws remained awkward.
[[File:Stompy gripper version 6.jpg|none|300px|Updated version 6]]
=== Updated Version 7 ===
Decreased profile of rib material to make screw installation process easier.
3216e11c9ce591b54fe581111879d7bb472de824
1413
1410
2024-06-02T06:48:42Z
Dymaxion
22
/* Updated Version 7 */
wikitext
text/x-wiki
== Gripper Design Iterations ==
=== Original Design ===
Problems: Too flexible at fingertips, too stiff in inner surface.
[[File:Gripper 0.jpg|none|300px|The non-optimized gripper based on the UMI study]]
=== Updated Version 1 ===
Made ribs thinner at their ends so that they would bend more easily. Added triangular cuts along the inner surface to allow the material to flex.
Problems: The triangular cut closest to the tip caused the inner surface of the grippers to tear.
[[File:Gripper version 1.jpg|none|300px|Updated version 1]]
=== Updated Version 2 ===
Moved slots to only align with the three thickest ribs, reducing potential for tears. Created an angled profile where the tips of the grippers are narrower than the base.
Problems: Screw holes were too large for screws to be inserted from below, tears persisted in some cases.
[[File:Stompy gripper 2.jpg|none|300px|Updated Version 2]]
=== Updated Version 3 ===
Decreased hole size to allow for better threading.
Problems: Tears persisted, insufficient clearance for hex key to actually tighten the screws from above (necessary for use with slide rail).
[[File:Stompy gripper version 3.jpg|none|300px|Updated version 3]]
=== Updated Version 4 ===
Decreased profile of rib material to make screw installation process easier, decreased cross section of individual ribs to allow them to bend more.
Problems: Too flimsy in general, and particularly at tips.
[[File:Gripper version 4.jpg|none|300px|Updated version 4]]
=== Updated Version 5 ===
Restored rib material and eliminated one triangular cut to solve tearing issue, attempted to print entirely without supports.
Problems: Poor print quality due to lack of supports.
[[File:Updated version 5.jpg|none|300px|Updated version 5]]
=== Updated Version 6 ===
Used solid tip and fewer ribs in the other section. Assembly with screws remained awkward.
[[File:Stompy gripper version 6.jpg|none|300px|Updated version 6]]
=== Updated Version 7 ===
Switched cuts to the inside wall and eliminated redundant rib.
53b89cf0de524897b257f20dceae497eae52156c
1414
1413
2024-06-02T06:52:21Z
Dymaxion
22
/* Updated Version 6 */
wikitext
text/x-wiki
== Gripper Design Iterations ==
=== Original Design ===
Problems: Too flexible at fingertips, too stiff in inner surface.
[[File:Gripper 0.jpg|none|300px|The non-optimized gripper based on the UMI study]]
=== Updated Version 1 ===
Made ribs thinner at their ends so that they would bend more easily. Added triangular cuts along the inner surface to allow the material to flex.
Problems: The triangular cut closest to the tip caused the inner surface of the grippers to tear.
[[File:Gripper version 1.jpg|none|300px|Updated version 1]]
=== Updated Version 2 ===
Moved slots to only align with the three thickest ribs, reducing potential for tears. Created an angled profile where the tips of the grippers are narrower than the base.
Problems: Screw holes were too large for screws to be inserted from below, tears persisted in some cases.
[[File:Stompy gripper 2.jpg|none|300px|Updated Version 2]]
=== Updated Version 3 ===
Decreased hole size to allow for better threading.
Problems: Tears persisted, insufficient clearance for hex key to actually tighten the screws from above (necessary for use with slide rail).
[[File:Stompy gripper version 3.jpg|none|300px|Updated version 3]]
=== Updated Version 4 ===
Decreased profile of rib material to make screw installation process easier, decreased cross section of individual ribs to allow them to bend more.
Problems: Too flimsy in general, and particularly at tips.
[[File:Gripper version 4.jpg|none|300px|Updated version 4]]
=== Updated Version 5 ===
Restored rib material and eliminated one triangular cut to solve tearing issue, attempted to print entirely without supports.
Problems: Poor print quality due to lack of supports.
[[File:Updated version 5.jpg|none|300px|Updated version 5]]
=== Updated Version 6 ===
Used solid tip and fewer ribs in the other section. Assembly with screws remained awkward.
[[File:Stompy gripper version 6.jpg|none|300px|Updated version 6]]
=== Updated Version 7 ===
Switched cuts to the inside wall and eliminated redundant rib.
6b85cab78ddf68f2f41a92667bb29fbaa4edd5b3
1415
1414
2024-06-02T08:15:19Z
Dymaxion
22
/* Original Design */
wikitext
text/x-wiki
== Gripper Design Iterations ==
=== Updated Version 7 ===
Switched cuts to the inside wall and eliminated redundant rib.
[[File:Stompy gripper version 7.jpg|none|300px|Updated version 7]]
=== Updated Version 1 ===
Made ribs thinner at their ends so that they would bend more easily. Added triangular cuts along the inner surface to allow the material to flex.
Problems: The triangular cut closest to the tip caused the inner surface of the grippers to tear.
[[File:Gripper version 1.jpg|none|300px|Updated version 1]]
=== Updated Version 2 ===
Moved slots to only align with the three thickest ribs, reducing potential for tears. Created an angled profile where the tips of the grippers are narrower than the base.
Problems: Screw holes were too large for screws to be inserted from below, tears persisted in some cases.
[[File:Stompy gripper 2.jpg|none|300px|Updated Version 2]]
=== Updated Version 3 ===
Decreased hole size to allow for better threading.
Problems: Tears persisted, insufficient clearance for hex key to actually tighten the screws from above (necessary for use with slide rail).
[[File:Stompy gripper version 3.jpg|none|300px|Updated version 3]]
=== Updated Version 4 ===
Decreased profile of rib material to make screw installation process easier, decreased cross section of individual ribs to allow them to bend more.
Problems: Too flimsy in general, and particularly at tips.
[[File:Gripper version 4.jpg|none|300px|Updated version 4]]
=== Updated Version 5 ===
Restored rib material and eliminated one triangular cut to solve tearing issue, attempted to print entirely without supports.
Problems: Poor print quality due to lack of supports.
[[File:Updated version 5.jpg|none|300px|Updated version 5]]
=== Updated Version 6 ===
Used solid tip and fewer ribs in the other section. Assembly with screws remained awkward.
[[File:Stompy gripper version 6.jpg|none|300px|Updated version 6]]
=== Updated Version 7 ===
Switched cuts to the inside wall and eliminated redundant rib.
43d1824872c0f4e7f199599254554ff5ffb3fcac
1417
1415
2024-06-02T08:17:03Z
Dymaxion
22
/* Updated Version 7 */
wikitext
text/x-wiki
== Gripper Design Iterations ==
=== Updated Version 7 ===
Switched cuts to the inside wall and eliminated redundant rib.
[[File:Stompy gripper version 7.jpg|none|300px|Updated version 7]]
=== Updated Version 1 ===
Made ribs thinner at their ends so that they would bend more easily. Added triangular cuts along the inner surface to allow the material to flex.
Problems: The triangular cut closest to the tip caused the inner surface of the grippers to tear.
[[File:Gripper version 1.jpg|none|300px|Updated version 1]]
=== Updated Version 2 ===
Moved slots to only align with the three thickest ribs, reducing potential for tears. Created an angled profile where the tips of the grippers are narrower than the base.
Problems: Screw holes were too large for screws to be inserted from below, tears persisted in some cases.
[[File:Stompy gripper 2.jpg|none|300px|Updated Version 2]]
=== Updated Version 3 ===
Decreased hole size to allow for better threading.
Problems: Tears persisted, insufficient clearance for hex key to actually tighten the screws from above (necessary for use with slide rail).
[[File:Stompy gripper version 3.jpg|none|300px|Updated version 3]]
=== Updated Version 4 ===
Decreased profile of rib material to make screw installation process easier, decreased cross section of individual ribs to allow them to bend more.
Problems: Too flimsy in general, and particularly at tips.
[[File:Gripper version 4.jpg|none|300px|Updated version 4]]
=== Updated Version 5 ===
Restored rib material and eliminated one triangular cut to solve tearing issue, attempted to print entirely without supports.
Problems: Poor print quality due to lack of supports.
[[File:Updated version 5.jpg|none|300px|Updated version 5]]
=== Updated Version 6 ===
Used solid tip and fewer ribs in the other section. Assembly with screws remained awkward.
[[File:Stompy gripper version 6.jpg|none|300px|Updated version 6]]
=== Updated Version 7 ===
Switched cuts to the inside wall and eliminated redundant rib.
[[File:Stompy gripper 7.jpg|none|300px|Updated version 7]]
9acc0789755e8a791dd0d7b55fbd3d8d2bdc3625
1419
1417
2024-06-02T08:21:08Z
Dymaxion
22
/* Updated Version 7 */
wikitext
text/x-wiki
== Gripper Design Iterations ==
=== Updated Version 7 ===
Switched cuts to the inside wall and eliminated redundant rib.
[[File:Stompy gripper version 7.jpg|none|300px|Updated version 7]]
=== Updated Version 1 ===
Made ribs thinner at their ends so that they would bend more easily. Added triangular cuts along the inner surface to allow the material to flex.
Problems: The triangular cut closest to the tip caused the inner surface of the grippers to tear.
[[File:Gripper version 1.jpg|none|300px|Updated version 1]]
=== Updated Version 2 ===
Moved slots to only align with the three thickest ribs, reducing potential for tears. Created an angled profile where the tips of the grippers are narrower than the base.
Problems: Screw holes were too large for screws to be inserted from below, tears persisted in some cases.
[[File:Stompy gripper 2.jpg|none|300px|Updated Version 2]]
=== Updated Version 3 ===
Decreased hole size to allow for better threading.
Problems: Tears persisted, insufficient clearance for hex key to actually tighten the screws from above (necessary for use with slide rail).
[[File:Stompy gripper version 3.jpg|none|300px|Updated version 3]]
=== Updated Version 4 ===
Decreased profile of rib material to make screw installation process easier, decreased cross section of individual ribs to allow them to bend more.
Problems: Too flimsy in general, and particularly at tips.
[[File:Gripper version 4.jpg|none|300px|Updated version 4]]
=== Updated Version 5 ===
Restored rib material and eliminated one triangular cut to solve tearing issue, attempted to print entirely without supports.
Problems: Poor print quality due to lack of supports.
[[File:Updated version 5.jpg|none|300px|Updated version 5]]
=== Updated Version 6 ===
Used solid tip and fewer ribs in the other section. Assembly with screws remained awkward.
[[File:Stompy gripper version 6.jpg|none|300px|Updated version 6]]
=== Updated Version 7 ===
Switched cuts to the inside wall and eliminated redundant rib.
[[File:Stompy gripper 7.jpg|thumb]]
aae55da46fad7e56cea38f6f69fc7e5ee585ff0e
1420
1419
2024-06-02T08:28:27Z
Dymaxion
22
/* Updated Version 7 */
wikitext
text/x-wiki
== Gripper Design Iterations ==
=== Updated Version 7 ===
Switched cuts to the inside wall and eliminated redundant rib.
[[File:Stompy gripper version 7.jpg|none|300px|Updated version 7]]
=== Updated Version 1 ===
Made ribs thinner at their ends so that they would bend more easily. Added triangular cuts along the inner surface to allow the material to flex.
Problems: The triangular cut closest to the tip caused the inner surface of the grippers to tear.
[[File:Gripper version 1.jpg|none|300px|Updated version 1]]
=== Updated Version 2 ===
Moved slots to only align with the three thickest ribs, reducing potential for tears. Created an angled profile where the tips of the grippers are narrower than the base.
Problems: Screw holes were too large for screws to be inserted from below, tears persisted in some cases.
[[File:Stompy gripper 2.jpg|none|300px|Updated Version 2]]
=== Updated Version 3 ===
Decreased hole size to allow for better threading.
Problems: Tears persisted, insufficient clearance for hex key to actually tighten the screws from above (necessary for use with slide rail).
[[File:Stompy gripper version 3.jpg|none|300px|Updated version 3]]
=== Updated Version 4 ===
Decreased profile of rib material to make screw installation process easier, decreased cross section of individual ribs to allow them to bend more.
Problems: Too flimsy in general, and particularly at tips.
[[File:Gripper version 4.jpg|none|300px|Updated version 4]]
=== Updated Version 5 ===
Restored rib material and eliminated one triangular cut to solve tearing issue, attempted to print entirely without supports.
Problems: Poor print quality due to lack of supports.
[[File:Updated version 5.jpg|none|300px|Updated version 5]]
=== Updated Version 6 ===
Used solid tip and fewer ribs in the other section. Assembly with screws remained awkward.
[[File:Stompy gripper version 6.jpg|none|300px|Updated version 6]]
=== Updated Version 7 ===
Switched cuts to the inside wall and eliminated redundant rib.
[[File:Stompy gripper version 7.jpg|none|300px|Updated version 7]]
6799f26f51363f7989fa52d8b602669a5f624b76
1421
1420
2024-06-02T08:29:10Z
Dymaxion
22
/* Updated Version 7 */
wikitext
text/x-wiki
== Gripper Design Iterations ==
=== Updated Version 7 ===
Switched cuts to the inside wall and eliminated redundant rib.
[[File:Stompy gripper version 7.jpg|none|300px|Updated version 7]]
=== Updated Version 1 ===
Made ribs thinner at their ends so that they would bend more easily. Added triangular cuts along the inner surface to allow the material to flex.
Problems: The triangular cut closest to the tip caused the inner surface of the grippers to tear.
[[File:Gripper version 1.jpg|none|300px|Updated version 1]]
=== Updated Version 2 ===
Moved slots to only align with the three thickest ribs, reducing potential for tears. Created an angled profile where the tips of the grippers are narrower than the base.
Problems: Screw holes were too large for screws to be inserted from below, tears persisted in some cases.
[[File:Stompy gripper 2.jpg|none|300px|Updated Version 2]]
=== Updated Version 3 ===
Decreased hole size to allow for better threading.
Problems: Tears persisted, insufficient clearance for hex key to actually tighten the screws from above (necessary for use with slide rail).
[[File:Stompy gripper version 3.jpg|none|300px|Updated version 3]]
=== Updated Version 4 ===
Decreased profile of rib material to make screw installation process easier, decreased cross section of individual ribs to allow them to bend more.
Problems: Too flimsy in general, and particularly at tips.
[[File:Gripper version 4.jpg|none|300px|Updated version 4]]
=== Updated Version 5 ===
Restored rib material and eliminated one triangular cut to solve tearing issue, attempted to print entirely without supports.
Problems: Poor print quality due to lack of supports.
[[File:Updated version 5.jpg|none|300px|Updated version 5]]
=== Updated Version 6 ===
Used solid tip and fewer ribs in the other section. Assembly with screws remained awkward.
[[File:Stompy gripper version 6.jpg|none|300px|Updated version 6]]
=== Updated Version 7 ===
Switched cuts to the inside wall and eliminated redundant rib.
[[File:Stompy gripper 7.jpg|none|300px|Updated version 7]]
9acc0789755e8a791dd0d7b55fbd3d8d2bdc3625
1422
1421
2024-06-02T08:29:29Z
Dymaxion
22
/* Updated Version 7 */
wikitext
text/x-wiki
== Gripper Design Iterations ==
=== Updated Version 7 ===
Switched cuts to the inside wall and eliminated redundant rib.
[[File:Stompy gripper version 7.jpg|none|300px|Updated version 7]]
=== Updated Version 1 ===
Made ribs thinner at their ends so that they would bend more easily. Added triangular cuts along the inner surface to allow the material to flex.
Problems: The triangular cut closest to the tip caused the inner surface of the grippers to tear.
[[File:Gripper version 1.jpg|none|300px|Updated version 1]]
=== Updated Version 2 ===
Moved slots to only align with the three thickest ribs, reducing potential for tears. Created an angled profile where the tips of the grippers are narrower than the base.
Problems: Screw holes were too large for screws to be inserted from below, tears persisted in some cases.
[[File:Stompy gripper 2.jpg|none|300px|Updated Version 2]]
=== Updated Version 3 ===
Decreased hole size to allow for better threading.
Problems: Tears persisted, insufficient clearance for hex key to actually tighten the screws from above (necessary for use with slide rail).
[[File:Stompy gripper version 3.jpg|none|300px|Updated version 3]]
=== Updated Version 4 ===
Decreased profile of rib material to make screw installation process easier, decreased cross section of individual ribs to allow them to bend more.
Problems: Too flimsy in general, and particularly at tips.
[[File:Gripper version 4.jpg|none|300px|Updated version 4]]
=== Updated Version 5 ===
Restored rib material and eliminated one triangular cut to solve tearing issue, attempted to print entirely without supports.
Problems: Poor print quality due to lack of supports.
[[File:Updated version 5.jpg|none|300px|Updated version 5]]
=== Updated Version 6 ===
Used solid tip and fewer ribs in the other section. Assembly with screws remained awkward.
[[File:Stompy gripper version 6.jpg|none|300px|Updated version 6]]
=== Updated Version 7 ===
Switched cuts to the inside wall and eliminated redundant rib.
[[File:Stomy gripper 7.jpg|none|300px|Updated version 7]]
c219b615236204b6f4eeb05ba31755c48c96bc28
1428
1422
2024-06-03T21:48:48Z
Dymaxion
22
wikitext
text/x-wiki
== Gripper Design Iterations ==
=== Updated Version 7 ===
Switched cuts to the inside wall and eliminated redundant rib.
[[File:Stompy gripper version 7.jpg|none|300px|Updated version 7]]
=== Updated Version 1 ===
Made ribs thinner at their ends so that they would bend more easily. Added triangular cuts along the inner surface to allow the material to flex.
Problems: The triangular cut closest to the tip caused the inner surface of the grippers to tear.
[[File:Gripper version 1.jpg|none|300px|Updated version 1]]
=== Updated Version 2 ===
Moved slots to only align with the three thickest ribs, reducing potential for tears. Created an angled profile where the tips of the grippers are narrower than the base.
Problems: Screw holes were too large for screws to be inserted from below, tears persisted in some cases.
[[File:Stompy gripper 2.jpg|none|300px|Updated Version 2]]
=== Updated Version 3 ===
Decreased hole size to allow for better threading.
Problems: Tears persisted, insufficient clearance for hex key to actually tighten the screws from above (necessary for use with slide rail).
[[File:Stompy gripper version 3.jpg|none|300px|Updated version 3]]
=== Updated Version 4 ===
Decreased profile of rib material to make screw installation process easier, decreased cross section of individual ribs to allow them to bend more.
Problems: Too flimsy in general, and particularly at tips.
[[File:Gripper version 4.jpg|none|300px|Updated version 4]]
=== Updated Version 5 ===
Restored rib material and eliminated one triangular cut to solve tearing issue, attempted to print entirely without supports.
Problems: Poor print quality due to lack of supports.
[[File:Updated version 5.jpg|none|300px|Updated version 5]]
=== Updated Version 6 ===
Used a solid tip and fewer ribs in the remaining section.
Problems: Assembly with screws remained awkward.
[[File:Stompy gripper version 6.jpg|none|300px|Updated version 6]]
=== Updated Version 7 ===
Switched cuts to the inside wall and eliminated redundant rib.
[[File:Stomy gripper 7.jpg|none|300px|Updated version 7]]
=== Updated Version 8 ===
Tapered inside wall and extended it to the now restored rib.
[[File:Stompy gripper 8.jpg|thumb|Updated version 8]]
99e04c142a5d1af0fbdc6141461c8deeec06a7b9
1429
1428
2024-06-03T21:49:35Z
Dymaxion
22
wikitext
text/x-wiki
== Gripper Design Iterations ==
=== Updated Version 1 ===
Made ribs thinner at their ends so that they would bend more easily. Added triangular cuts along the inner surface to allow the material to flex.
Problems: The triangular cut closest to the tip caused the inner surface of the grippers to tear.
[[File:Gripper version 1.jpg|none|300px|Updated version 1]]
=== Updated Version 2 ===
Moved slots to only align with the three thickest ribs, reducing potential for tears. Created an angled profile where the tips of the grippers are narrower than the base.
Problems: Screw holes were too large for screws to be inserted from below, tears persisted in some cases.
[[File:Stompy gripper 2.jpg|none|300px|Updated Version 2]]
=== Updated Version 3 ===
Decreased hole size to allow for better threading.
Problems: Tears persisted, insufficient clearance for hex key to actually tighten the screws from above (necessary for use with slide rail).
[[File:Stompy gripper version 3.jpg|none|300px|Updated version 3]]
=== Updated Version 4 ===
Decreased profile of rib material to make screw installation process easier, decreased cross section of individual ribs to allow them to bend more.
Problems: Too flimsy in general, and particularly at tips.
[[File:Gripper version 4.jpg|none|300px|Updated version 4]]
=== Updated Version 5 ===
Restored rib material and eliminated one triangular cut to solve tearing issue, attempted to print entirely without supports.
Problems: Poor print quality due to lack of supports.
[[File:Updated version 5.jpg|none|300px|Updated version 5]]
=== Updated Version 6 ===
Used a solid tip and fewer ribs in the remaining section.
Problems: Assembly with screws remained awkward.
[[File:Stompy gripper version 6.jpg|none|300px|Updated version 6]]
=== Updated Version 7 ===
Switched cuts to the inside wall and eliminated redundant rib.
[[File:Stomy gripper 7.jpg|none|300px|Updated version 7]]
=== Updated Version 8 ===
Tapered inside wall and extended it to the now restored rib.
[[File:Stompy gripper 8.jpg|none|300px|Updated version 8]]
24dd4804ccec62a6f3f09a71348277ab1aa52126
File:Stompy gripper version 6.jpg
6
311
1411
2024-06-01T05:04:55Z
Dymaxion
22
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
Main Page
0
1
1412
1398
2024-06-01T22:06:50Z
108.211.178.220
0
/* Getting Started */
wikitext
text/x-wiki
Welcome to the Humanoid Robots Wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.youtube.com/watch?v=LiNgr1tz49I&list=PLZnJoM76RM6ItAfZIxJYNKdaR_BobleLY Advanced Robot Dynamics]
| High-quality open-source course from CMU
|-
| [https://www.youtube.com/watch?v=6rUdAOCNXAU&list=PLZnJoM76RM6KugDT9sw5zhAmqKnGeoLRa Optimal Control]
| High-quality open-source course from CMU
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
| Textbook for graduate-level courses in robotics
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and in real environments
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Courses]]
| Category for pages about useful courses related to robotics
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Firmware]]
| Category for pages relating to firmware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Papers]]
| Category for humanoid robotics papers
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== List of Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Haier]]
| [[Kuavo (Kuafu)]]
|-
| [[Honda Robotics]]
| [[ASIMO]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Instituto Italiano]]
| [[iCub]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[LASER Robotics]]
| [[HECTOR V2]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[NASA]]
| [[Valkyrie]], [[Robonaut2]]
|-
| [[NEURA Robotics]]
| [[4NE-1]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[Proxy]]
|
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[SoftBank Robotics]]
| [[Pepper]], [[NAO]]
|-
| [[Stanford Robotics Lab]]
| [[OceanOneK]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[SUPCON]]
| [[Navigator α]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[UC Berkeley]]
| [[Berkeley Blue]]
|-
| [[Unitree]]
| [[H1]], [[G1]]
|-
| [[University of Tehran]]
| [[Surena IV]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[WorkFar]]
| [[WorkFar Syntro]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Xpeng]]
| [[PX5]]
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== List of Actuators ===
{| class="wikitable"
|-
! Actuator
! Notes
|-
| [[OBot]]
| Open-source actuator
|-
| [[SPIN Servo]]
| Open-source actuator
|-
| [[VESCular6]]
| A project based on [[VESC]]
|-
| [[ODrive]]
| A precision motor controller
|-
| [[Solo Motor Controller]]
| A motor controller alternative to the [[ODrive]].
|-
| [[J60]]
| Actuators built for the [[DEEP Robotics]] quadrupeds.
|-
| [[K-Scale Motor Controller]]
| An open-source motor controller
|}
67468244a9e6a8cefbef04920b9b05dd94ba3e47
1453
1412
2024-06-06T00:28:49Z
Admin
1
/* List of Robots */
wikitext
text/x-wiki
Welcome to the Humanoid Robots Wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.youtube.com/watch?v=LiNgr1tz49I&list=PLZnJoM76RM6ItAfZIxJYNKdaR_BobleLY Advanced Robot Dynamics]
| High-quality open-source course from CMU
|-
| [https://www.youtube.com/watch?v=6rUdAOCNXAU&list=PLZnJoM76RM6KugDT9sw5zhAmqKnGeoLRa Optimal Control]
| High-quality open-source course from CMU
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
| Textbook for graduate-level courses in robotics
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and in real environments
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Courses]]
| Category for pages about useful courses related to robotics
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Firmware]]
| Category for pages relating to firmware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Papers]]
| Category for humanoid robotics papers
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== List of Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Haier]]
| [[Kuavo (Kuafu)]]
|-
| [[Honda Robotics]]
| [[ASIMO]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Instituto Italiano]]
| [[iCub]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[LASER Robotics]]
| [[HECTOR V2]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[NASA]]
| [[Valkyrie]], [[Robonaut2]]
|-
| [[NEURA Robotics]]
| [[4NE-1]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[Pollen Robotics]]
| [[Reachy]]
|-
| [[Proxy]]
|
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[SoftBank Robotics]]
| [[Pepper]], [[NAO]]
|-
| [[Stanford Robotics Lab]]
| [[OceanOneK]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[SUPCON]]
| [[Navigator α]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[UC Berkeley]]
| [[Berkeley Blue]]
|-
| [[Unitree]]
| [[H1]], [[G1]]
|-
| [[University of Tehran]]
| [[Surena IV]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[WorkFar]]
| [[WorkFar Syntro]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Xpeng]]
| [[PX5]]
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
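Many motor controllers, including several of the actuators listed below, are typically commanded over one of these buses (most commonly [[Controller Area Network (CAN)|CAN]]). As a heavily simplified sketch only, the example below sends a single classic CAN frame through Linux SocketCAN using nothing but the Python standard library; the interface name <code>can0</code>, the arbitration ID, and the payload bytes are placeholders and do not correspond to any particular controller's protocol.
<syntaxhighlight lang=python>
# Minimal SocketCAN sketch (Linux only): send one classic CAN 2.0 frame.
# Assumes an interface named "can0" has already been brought up, e.g.
#   ip link set can0 up type can bitrate 1000000
import socket
import struct

CAN_FRAME_FMT = "=IB3x8s"  # can_id, data length code, 3 pad bytes, 8 data bytes


def send_frame(channel: str, can_id: int, data: bytes) -> None:
    """Pack and transmit a single classic CAN frame on the given interface."""
    if len(data) > 8:
        raise ValueError("classic CAN payloads are at most 8 bytes")
    frame = struct.pack(CAN_FRAME_FMT, can_id, len(data), data.ljust(8, b"\x00"))
    with socket.socket(socket.AF_CAN, socket.SOCK_RAW, socket.CAN_RAW) as sock:
        sock.bind((channel,))
        sock.send(frame)


if __name__ == "__main__":
    # Placeholder ID and payload for illustration; real motor controllers
    # define their own message IDs and encodings.
    send_frame("can0", 0x123, b"\x01\x02\x03\x04")
</syntaxhighlight>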
=== List of Actuators ===
{| class="wikitable"
|-
! Actuator
! Notes
|-
| [[OBot]]
| Open-source actuator
|-
| [[SPIN Servo]]
| Open-source actuator
|-
| [[VESCular6]]
| A project based on [[VESC]]
|-
| [[ODrive]]
| A precision motor controller
|-
| [[Solo Motor Controller]]
| A motor controller alternative to the [[ODrive]].
|-
| [[J60]]
| Actuators built for the [[DEEP Robotics]] quadrupeds.
|-
| [[K-Scale Motor Controller]]
| An open-source motor controller
|}
8ac879d0730ee9a580fdbf87a1b857d995864f0b
File:Stomy gripper 7.jpg
6
312
1416
2024-06-02T08:16:38Z
Dymaxion
22
wikitext
text/x-wiki
A photo of one Stompy gripper with a large lower void.
529dda8186713569c55d73c3ed8cc844d0629c90
File:Stompy gripper 7 1.jpg
6
313
1418
2024-06-02T08:18:21Z
Dymaxion
22
A photo of one Stompy gripper with a large void and triangles on the back side of the gripping face.
wikitext
text/x-wiki
== Summary ==
A photo of one Stompy gripper with a large void and triangles on the back side of the gripping face.
21d70f088195ac880a2af9b0b835cc9b3d6caca0
K-Scale Lecture Circuit
0
299
1423
1399
2024-06-03T17:35:55Z
108.211.178.220
0
wikitext
text/x-wiki
{| class="wikitable"
|-
! Date
! Presenter
! Topic
! Link
|-
| Add next one
|
|
|-
| 2024.06.07
| Ben
| Quantization
|
|-
| 2024.06.06
| Tom
| Linux Raw
|
|-
| 2024.06.05
| Hugo
| Gaussian Splats
|
|-
| 2024.06.04
| Paweł
| What I (want to) believe in
|
|-
| 2024.06.03
| Dennis
| Speech representation learning papers
|
|-
| 2024.05.31
|
|
|
|-
| 2024.05.30
| Isaac
| VLMs
|
|-
| 2024.05.29
| Allen
| PPO
|
|}
[[Category: K-Scale]]
b903b9676410d1c64ae441eb634afea133480d0a
1430
1423
2024-06-03T22:21:55Z
Budzianowski
19
wikitext
text/x-wiki
{| class="wikitable"
|-
! Date
! Presenter
! Topic
! Link
|-
| Add next one
|
|
|-
| 2024.06.07
| Ben
| Quantization
|
|-
| 2024.06.06
| Tom
| Linux Raw
|
|-
| 2024.06.05
| Hugo
| Gaussian Splats
|
|-
| 2024.06.04
| Dennis
| Speech representation learning papers
|
|-
| 2024.06.03
| Paweł
| What I (want to) believe in
|
|-
| 2024.05.30
| Isaac
| VLMs
|
|-
| 2024.05.29
| Allen
| PPO
|
|}
[[Category: K-Scale]]
07e0d9775a820808805c546b4f6f20789443d72a
1444
1430
2024-06-05T01:52:42Z
Admin
1
wikitext
text/x-wiki
{| class="wikitable"
|-
! Date
! Presenter
! Topic
! Link
|-
|
| Timothy
| NeRFs
|
|-
| 2024.06.07
| Ben
| Quantization
|
|-
| 2024.06.06
| Tom
| Linux Raw
|
|-
| 2024.06.05
| Hugo
| Gaussian Splats
|
|-
| 2024.06.04
| Dennis
| Speech representation learning papers
|
|-
| 2024.06.03
| Paweł
| What I (want to) believe in
|
|-
| 2024.05.30
| Isaac
| VLMs
|
|-
| 2024.05.29
| Allen
| PPO
|
|}
[[Category: K-Scale]]
6e56a3ae939c10ebf1f48ec2fa5f36c801023309
1445
1444
2024-06-05T01:53:03Z
Admin
1
wikitext
text/x-wiki
{| class="wikitable"
|-
! Date
! Presenter
! Topic
|-
|
| Timothy
| NeRFs
|-
| 2024.06.07
| Ben
| Quantization
|-
| 2024.06.06
| Tom
| Linux Raw
|-
| 2024.06.05
| Hugo
| Gaussian Splats
|-
| 2024.06.04
| Dennis
| Speech representation learning papers
|-
| 2024.06.03
| Paweł
| What I (want to) believe in
|-
| 2024.05.30
| Isaac
| VLMs
|-
| 2024.05.29
| Allen
| PPO
|}
[[Category: K-Scale]]
5266ddb69aca91df94ba40c603841667b02545a5
Robot Descriptions List
0
281
1424
1374
2024-06-03T18:26:08Z
Vrtnis
21
/*Add BHR-4*/
wikitext
text/x-wiki
=== Humanoids ===
{| class="wikitable sortable"
|-
! Sr No !! Name !! Maker !! Formats !! URL !! License !! Meshes !! Inertias !! Collisions
|-
| 1 || Stompy || K-Scale Labs || URDF || [https://stompy.kscale.dev URDF], [https://stompy.kscale.dev MJCF] || MIT || ✔️ || ✔️ || ✔️
|-
| 2 || Digit || Agility Robotics || URDF || [https://github.com/adubredu/DigitRobot.jl URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| 3 || H1 || UNITREE Robotics || MJCF || [https://github.com/google-deepmind/mujoco_menagerie/tree/main/unitree_h1 MJCF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 4 || Atlas v4 || Boston Dynamics || URDF || [https://github.com/openai/roboschool/tree/1.0.49/roboschool/models_robot/atlas_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 5 || Valkyrie || NASA JSC Robotics || URDF, Xacro || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/val_description/model URDF], [https://gitlab.com/nasa-jsc-robotics/val_description Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 6 || JVRC-1 || AIST || MJCF, URDF || [https://github.com/isri-aist/jvrc_mj_description/ MJCF], [https://github.com/stephane-caron/jvrc_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 7 || iCub || IIT || URDF || [https://github.com/robotology/icub-models/tree/master/iCub URDF] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| 8 || JAXON || JSK || COLLADA, URDF, VRML || [https://github.com/stephane-caron/openrave_models/tree/master/JAXON COLLADA], [https://github.com/robot-descriptions/jaxon_description URDF], [https://github.com/start-jsk/rtmros_choreonoid/tree/master/jvrc_models/JAXON_JVRC VRML] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| 9 || Atlas DRC (v3) || Boston Dynamics || URDF || [https://github.com/RobotLocomotion/models/blob/master/atlas/atlas_convex_hull.urdf URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 10 || Gundam RX-78 || Bandai Namco Filmworks || URDF || [https://github.com/gundam-global-challenge/gundam_robot/tree/master/gundam_rx78_description URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| 11 || Romeo || Aldebaran Robotics || URDF || [https://github.com/ros-aldebaran/romeo_robot/tree/master/romeo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 12 || SigmaBan || Rhoban || URDF || [https://github.com/Rhoban/sigmaban_urdf URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 13 || Robonaut 2 || NASA JSC Robotics || URDF || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/r2_description URDF] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 14 || TALOS || PAL Robotics || URDF, Xacro || [https://github.com/stack-of-tasks/talos-data URDF], [https://github.com/pal-robotics/talos_robot/tree/kinetic-devel/talos_description Xacro] || LGPL-3.0, Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 15 || WALK-MAN || IIT || Xacro || [https://github.com/ADVRHumanoids/iit-walkman-ros-pkg/tree/master/walkman_urdf Xacro] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 16 || Draco3 || Apptronik || URDF || [https://github.com/shbang91/draco3_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 17 || ergoCub || IIT || URDF || [https://github.com/icub-tech-iit/ergocub-software/tree/master/urdf/ergoCub URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 18 || Baxter || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/baxter.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 19 || Pepper || SoftBank Robotics || URDF || [https://github.com/ros-naoqi/pepper_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 20 || Mini-Cheetah || MIT || URDF || [https://github.com/MIT-Mini-Cheetah/mini-cheetah URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 21 || Thor-Mang || ROBOTIS || URDF || [https://github.com/ROBOTIS-GIT/ROBOTIS-MANIPULATION-THORMANG URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 22 || Cassie || Agility Robotics || URDF || [https://github.com/agilityrobotics/cassie_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 23 || Sophia || Hanson Robotics || URDF || [https://github.com/hansonrobotics/sophia_robot URDF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 24 || Asimo || Honda || URDF || [https://github.com/honda/asimo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 25 || HRP-5P || Kawada Robotics || URDF || [https://github.com/kawada-robotics/hrp5p URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 26 || Valkyrie R5 || NASA || URDF, Xacro || [https://github.com/nasa/valkyrie_simulation URDF], [https://github.com/nasa/valkyrie_robot Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 27 || REEM-C || PAL Robotics || URDF || [https://github.com/pal-robotics/reemc_description URDF] || LGPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 28 || Darwin-OP || ROBOTIS || URDF || [https://github.com/ROBOTIS-GIT/Darwin_OP_ROS URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 29 || Poppy || Inria Flowers || URDF || [https://github.com/poppy-project/poppy_humanoid URDF] || GPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 30 || Kengoro || JSK || URDF || [https://github.com/jsk-ros-pkg/jsk_models/tree/master/kengoro_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 31 || SURALP || Istanbul Technical University || URDF || [https://github.com/suralp/suralp URDF] || GPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 32 || Kengoro || JSK || URDF || [https://github.com/jsk-ros-pkg/jsk_models/tree/master/kengoro_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 33 || ANYmal || ANYbotics || URDF || [https://github.com/leggedrobotics/anymal_b_simple_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 34 || MIR-Lola || Munich Institute of Robotics and Machine Intelligence || URDF || [https://github.com/mir-lab/lola_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 35 || HSR || Toyota || URDF || [https://github.com/toyota-research-institute/hsr_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 36 || Pepper 2 || SoftBank Robotics || URDF || [https://github.com/ros-naoqi/pepper_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 37 || BHR-4 || Beijing Institute of Technology || URDF || [https://github.com/bit-bots/bhr4_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
=== End Effectors ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Allegro Hand || Wonik Robotics || URDF, MJCF || [https://github.com/RobotLocomotion/models/tree/master/allegro_hand_description/urdf URDF], [https://github.com/google-deepmind/mujoco_menagerie/tree/main/wonik_allegro MJCF] || BSD || ✔️ || ✔️ || ✔️
|-
| Shadow Hand E3M5 || The Shadow Robot Company || MJCF || [https://github.com/deepmind/mujoco_menagerie/tree/main/shadow_hand MJCF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Robotiq 2F-85 || Robotiq || MJCF, URDF, Xacro || [https://github.com/deepmind/mujoco_menagerie/tree/main/robotiq_2f85 MJCF], [https://github.com/a-price/robotiq_arg85_description URDF], [https://github.com/ros-industrial/robotiq/tree/kinetic-devel/robotiq_2f_85_gripper_visualization Xacro] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| BarrettHand || Barrett Technology || URDF || [https://github.com/jhu-lcsr-attic/bhand_model/tree/master/robots URDF] || BSD || ✖️ || ✔️ || ✔️
|-
| WSG 50 || SCHUNK || SDF || [https://github.com/RobotLocomotion/models/tree/master/wsg_50_description SDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Baxter Left End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/left_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Baxter Right End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/right_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|}
=== Educational ===
{| class="wikitable"
|-
! Name !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Double Pendulum || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/double_pendulum_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Simple Humanoid || URDF || [https://github.com/laas/simple_humanoid_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✖️
|-
| TriFingerEdu || URDF || [https://github.com/facebookresearch/differentiable-robot-model/tree/main/diff_robot_data/trifinger_edu_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| FingerEdu || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/finger_edu_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
== References ==
* https://github.com/robot-descriptions/awesome-robot-descriptions
* https://github.com/robot-descriptions/robot_descriptions.py
* https://github.com/robotology
28b6c494ffcb6d7810ec0cbd0b49386bf49e78d6
1425
1424
2024-06-03T18:27:08Z
Vrtnis
21
/*Add BHR-4*/
wikitext
text/x-wiki
=== Humanoids ===
{| class="wikitable sortable"
|-
! Sr No !! Name !! Maker !! Formats !! URL !! License !! Meshes !! Inertias !! Collisions
|-
| 1 || Stompy || K-Scale Labs || URDF || [https://stompy.kscale.dev URDF], [https://stompy.kscale.dev MJCF] || MIT || ✔️ || ✔️ || ✔️
|-
| 2 || Digit || Agility Robotics || URDF || [https://github.com/adubredu/DigitRobot.jl URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| 3 || H1 || UNITREE Robotics || MJCF || [https://github.com/google-deepmind/mujoco_menagerie/tree/main/unitree_h1 MJCF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 4 || Atlas v4 || Boston Dynamics || URDF || [https://github.com/openai/roboschool/tree/1.0.49/roboschool/models_robot/atlas_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 5 || Valkyrie || NASA JSC Robotics || URDF, Xacro || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/val_description/model URDF], [https://gitlab.com/nasa-jsc-robotics/val_description Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 6 || JVRC-1 || AIST || MJCF, URDF || [https://github.com/isri-aist/jvrc_mj_description/ MJCF], [https://github.com/stephane-caron/jvrc_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 7 || iCub || IIT || URDF || [https://github.com/robotology/icub-models/tree/master/iCub URDF] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| 8 || JAXON || JSK || COLLADA, URDF, VRML || [https://github.com/stephane-caron/openrave_models/tree/master/JAXON COLLADA], [https://github.com/robot-descriptions/jaxon_description URDF], [https://github.com/start-jsk/rtmros_choreonoid/tree/master/jvrc_models/JAXON_JVRC VRML] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| 9 || Atlas DRC (v3) || Boston Dynamics || URDF || [https://github.com/RobotLocomotion/models/blob/master/atlas/atlas_convex_hull.urdf URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 10 || Gundam RX-78 || Bandai Namco Filmworks || URDF || [https://github.com/gundam-global-challenge/gundam_robot/tree/master/gundam_rx78_description URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| 11 || Romeo || Aldebaran Robotics || URDF || [https://github.com/ros-aldebaran/romeo_robot/tree/master/romeo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 12 || SigmaBan || Rhoban || URDF || [https://github.com/Rhoban/sigmaban_urdf URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 13 || Robonaut 2 || NASA JSC Robotics || URDF || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/r2_description URDF] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 14 || TALOS || PAL Robotics || URDF, Xacro || [https://github.com/stack-of-tasks/talos-data URDF], [https://github.com/pal-robotics/talos_robot/tree/kinetic-devel/talos_description Xacro] || LGPL-3.0, Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 15 || WALK-MAN || IIT || Xacro || [https://github.com/ADVRHumanoids/iit-walkman-ros-pkg/tree/master/walkman_urdf Xacro] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 16 || Draco3 || Apptronik || URDF || [https://github.com/shbang91/draco3_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 17 || ergoCub || IIT || URDF || [https://github.com/icub-tech-iit/ergocub-software/tree/master/urdf/ergoCub URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 18 || Baxter || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/baxter.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 19 || Pepper || SoftBank Robotics || URDF || [https://github.com/ros-naoqi/pepper_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 20 || Mini-Cheetah || MIT || URDF || [https://github.com/MIT-Mini-Cheetah/mini-cheetah URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 21 || Thor-Mang || ROBOTIS || URDF || [https://github.com/ROBOTIS-GIT/ROBOTIS-MANIPULATION-THORMANG URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 22 || Cassie || Agility Robotics || URDF || [https://github.com/agilityrobotics/cassie_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 23 || Sophia || Hanson Robotics || URDF || [https://github.com/hansonrobotics/sophia_robot URDF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 24 || Asimo || Honda || URDF || [https://github.com/honda/asimo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 25 || HRP-5P || Kawada Robotics || URDF || [https://github.com/kawada-robotics/hrp5p URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 26 || Valkyrie R5 || NASA || URDF, Xacro || [https://github.com/nasa/valkyrie_simulation URDF], [https://github.com/nasa/valkyrie_robot Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 27 || REEM-C || PAL Robotics || URDF || [https://github.com/pal-robotics/reemc_description URDF] || LGPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 28 || Darwin-OP || ROBOTIS || URDF || [https://github.com/ROBOTIS-GIT/Darwin_OP_ROS URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 29 || Poppy || Inria Flowers || URDF || [https://github.com/poppy-project/poppy_humanoid URDF] || GPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 30 || Kengoro || JSK || URDF || [https://github.com/jsk-ros-pkg/jsk_models/tree/master/kengoro_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 31 || SURALP || Istanbul Technical University || URDF || [https://github.com/suralp/suralp URDF] || GPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 32 || Kengoro || JSK || URDF || [https://github.com/jsk-ros-pkg/jsk_models/tree/master/kengoro_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 33 || ANYmal || ANYbotics || URDF || [https://github.com/leggedrobotics/anymal_b_simple_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 34 || MIR-Lola || Munich Institute of Robotics and Machine Intelligence || URDF || [https://github.com/mir-lab/lola_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 35 || HSR || Toyota || URDF || [https://github.com/toyota-research-institute/hsr_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 36 || Pepper 2 || SoftBank Robotics || URDF || [https://github.com/ros-naoqi/pepper_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 37 || BHR-4 || Beijing Institute of Technology || URDF || [https://github.com/bit-bots/bhr4_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
=== End Effectors ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Allegro Hand || Wonik Robotics || URDF, MJCF || [https://github.com/RobotLocomotion/models/tree/master/allegro_hand_description/urdf URDF], [https://github.com/google-deepmind/mujoco_menagerie/tree/main/wonik_allegro MJCF] || BSD || ✔️ || ✔️ || ✔️
|-
| Shadow Hand E3M5 || The Shadow Robot Company || MJCF || [https://github.com/deepmind/mujoco_menagerie/tree/main/shadow_hand MJCF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Robotiq 2F-85 || Robotiq || MJCF, URDF, Xacro || [https://github.com/deepmind/mujoco_menagerie/tree/main/robotiq_2f85 MJCF], [https://github.com/a-price/robotiq_arg85_description URDF], [https://github.com/ros-industrial/robotiq/tree/kinetic-devel/robotiq_2f_85_gripper_visualization Xacro] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| BarrettHand || Barrett Technology || URDF || [https://github.com/jhu-lcsr-attic/bhand_model/tree/master/robots URDF] || BSD || ✖️ || ✔️ || ✔️
|-
| WSG 50 || SCHUNK || SDF || [https://github.com/RobotLocomotion/models/tree/master/wsg_50_description SDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Baxter Left End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/left_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Baxter Right End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/right_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|}
=== Educational ===
{| class="wikitable"
|-
! Name !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Double Pendulum || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/double_pendulum_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Simple Humanoid || URDF || [https://github.com/laas/simple_humanoid_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✖️
|-
| TriFingerEdu || URDF || [https://github.com/facebookresearch/differentiable-robot-model/tree/main/diff_robot_data/trifinger_edu_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| FingerEdu || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/finger_edu_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
== References ==
* https://github.com/robot-descriptions/awesome-robot-descriptions
* https://github.com/robot-descriptions/robot_descriptions.py
* https://github.com/robotology
07deea957a246684a127cdfc9ad7653dc51ad538
1426
1425
2024-06-03T18:32:04Z
Vrtnis
21
/*Add Tiago and InMoov*/
wikitext
text/x-wiki
=== Humanoids ===
{| class="wikitable sortable"
|-
! Sr No !! Name !! Maker !! Formats !! URL !! License !! Meshes !! Inertias !! Collisions
|-
| 1 || Stompy || K-Scale Labs || URDF || [https://stompy.kscale.dev URDF], [https://stompy.kscale.dev MJCF] || MIT || ✔️ || ✔️ || ✔️
|-
| 2 || Digit || Agility Robotics || URDF || [https://github.com/adubredu/DigitRobot.jl URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| 3 || H1 || UNITREE Robotics || MJCF || [https://github.com/google-deepmind/mujoco_menagerie/tree/main/unitree_h1 MJCF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 4 || Atlas v4 || Boston Dynamics || URDF || [https://github.com/openai/roboschool/tree/1.0.49/roboschool/models_robot/atlas_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 5 || Valkyrie || NASA JSC Robotics || URDF, Xacro || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/val_description/model URDF], [https://gitlab.com/nasa-jsc-robotics/val_description Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 6 || JVRC-1 || AIST || MJCF, URDF || [https://github.com/isri-aist/jvrc_mj_description/ MJCF], [https://github.com/stephane-caron/jvrc_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 7 || iCub || IIT || URDF || [https://github.com/robotology/icub-models/tree/master/iCub URDF] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| 8 || JAXON || JSK || COLLADA, URDF, VRML || [https://github.com/stephane-caron/openrave_models/tree/master/JAXON COLLADA], [https://github.com/robot-descriptions/jaxon_description URDF], [https://github.com/start-jsk/rtmros_choreonoid/tree/master/jvrc_models/JAXON_JVRC VRML] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| 9 || Atlas DRC (v3) || Boston Dynamics || URDF || [https://github.com/RobotLocomotion/models/blob/master/atlas/atlas_convex_hull.urdf URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 10 || Gundam RX-78 || Bandai Namco Filmworks || URDF || [https://github.com/gundam-global-challenge/gundam_robot/tree/master/gundam_rx78_description URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| 11 || Romeo || Aldebaran Robotics || URDF || [https://github.com/ros-aldebaran/romeo_robot/tree/master/romeo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 12 || SigmaBan || Rhoban || URDF || [https://github.com/Rhoban/sigmaban_urdf URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 13 || Robonaut 2 || NASA JSC Robotics || URDF || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/r2_description URDF] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 14 || TALOS || PAL Robotics || URDF, Xacro || [https://github.com/stack-of-tasks/talos-data URDF], [https://github.com/pal-robotics/talos_robot/tree/kinetic-devel/talos_description Xacro] || LGPL-3.0, Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 15 || WALK-MAN || IIT || Xacro || [https://github.com/ADVRHumanoids/iit-walkman-ros-pkg/tree/master/walkman_urdf Xacro] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 16 || Draco3 || Apptronik || URDF || [https://github.com/shbang91/draco3_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 17 || ergoCub || IIT || URDF || [https://github.com/icub-tech-iit/ergocub-software/tree/master/urdf/ergoCub URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 18 || Baxter || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/baxter.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 19 || Pepper || SoftBank Robotics || URDF || [https://github.com/ros-naoqi/pepper_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 20 || Mini-Cheetah || MIT || URDF || [https://github.com/MIT-Mini-Cheetah/mini-cheetah URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 21 || Thor-Mang || ROBOTIS || URDF || [https://github.com/ROBOTIS-GIT/ROBOTIS-MANIPULATION-THORMANG URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 22 || Cassie || Agility Robotics || URDF || [https://github.com/agilityrobotics/cassie_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 23 || Sophia || Hanson Robotics || URDF || [https://github.com/hansonrobotics/sophia_robot URDF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 24 || Asimo || Honda || URDF || [https://github.com/honda/asimo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 25 || HRP-5P || Kawada Robotics || URDF || [https://github.com/kawada-robotics/hrp5p URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 26 || Valkyrie R5 || NASA || URDF, Xacro || [https://github.com/nasa/valkyrie_simulation URDF], [https://github.com/nasa/valkyrie_robot Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 27 || REEM-C || PAL Robotics || URDF || [https://github.com/pal-robotics/reemc_description URDF] || LGPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 28 || Darwin-OP || ROBOTIS || URDF || [https://github.com/ROBOTIS-GIT/Darwin_OP_ROS URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 29 || Poppy || Inria Flowers || URDF || [https://github.com/poppy-project/poppy_humanoid URDF] || GPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 30 || Kengoro || JSK || URDF || [https://github.com/jsk-ros-pkg/jsk_models/tree/master/kengoro_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 31 || SURALP || Istanbul Technical University || URDF || [https://github.com/suralp/suralp URDF] || GPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 32 || Kengoro || JSK || URDF || [https://github.com/jsk-ros-pkg/jsk_models/tree/master/kengoro_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 33 || ANYmal || ANYbotics || URDF || [https://github.com/leggedrobotics/anymal_b_simple_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 34 || MIR-Lola || Munich Institute of Robotics and Machine Intelligence || URDF || [https://github.com/mir-lab/lola_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 35 || HSR || Toyota || URDF || [https://github.com/toyota-research-institute/hsr_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 36 || Pepper 2 || SoftBank Robotics || URDF || [https://github.com/ros-naoqi/pepper_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 37 || BHR-4 || Beijing Institute of Technology || URDF || [https://github.com/bit-bots/bhr4_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 38 || Tiago || PAL Robotics || URDF || [https://github.com/pal-robotics/tiago_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 39 || InMoov || Gael Langevin || URDF || [https://github.com/InMoov/inmoov_ros URDF] || GPL-3.0 || ✔️ || ✔️ || ✔️
|}
=== End Effectors ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Allegro Hand || Wonik Robotics || URDF, MJCF || [https://github.com/RobotLocomotion/models/tree/master/allegro_hand_description/urdf URDF], [https://github.com/google-deepmind/mujoco_menagerie/tree/main/wonik_allegro MJCF] || BSD || ✔️ || ✔️ || ✔️
|-
| Shadow Hand E3M5 || The Shadow Robot Company || MJCF || [https://github.com/deepmind/mujoco_menagerie/tree/main/shadow_hand MJCF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Robotiq 2F-85 || Robotiq || MJCF, URDF, Xacro || [https://github.com/deepmind/mujoco_menagerie/tree/main/robotiq_2f85 MJCF], [https://github.com/a-price/robotiq_arg85_description URDF], [https://github.com/ros-industrial/robotiq/tree/kinetic-devel/robotiq_2f_85_gripper_visualization Xacro] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| BarrettHand || Barrett Technology || URDF || [https://github.com/jhu-lcsr-attic/bhand_model/tree/master/robots URDF] || BSD || ✖️ || ✔️ || ✔️
|-
| WSG 50 || SCHUNK || SDF || [https://github.com/RobotLocomotion/models/tree/master/wsg_50_description SDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Baxter Left End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/left_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Baxter Right End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/right_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|}
=== Educational ===
{| class="wikitable"
|-
! Name !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Double Pendulum || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/double_pendulum_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Simple Humanoid || URDF || [https://github.com/laas/simple_humanoid_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✖️
|-
| TriFingerEdu || URDF || [https://github.com/facebookresearch/differentiable-robot-model/tree/main/diff_robot_data/trifinger_edu_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| FingerEdu || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/finger_edu_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
== References ==
* https://github.com/robot-descriptions/awesome-robot-descriptions
* https://github.com/robot-descriptions/robot_descriptions.py
* https://github.com/robotology
b1e4a5cc3c0f88a0e31cd7b63a7dcef996cf7b35
1446
1426
2024-06-05T19:37:41Z
Vrtnis
21
/*Add OpenAI BipedalWalker*/
wikitext
text/x-wiki
=== Humanoids ===
{| class="wikitable sortable"
|-
! Sr No !! Name !! Maker !! Formats !! URL !! License !! Meshes !! Inertias !! Collisions
|-
| 1 || Stompy || K-Scale Labs || URDF || [https://stompy.kscale.dev URDF], [https://stompy.kscale.dev MJCF] || MIT || ✔️ || ✔️ || ✔️
|-
| 2 || Digit || Agility Robotics || URDF || [https://github.com/adubredu/DigitRobot.jl URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| 3 || H1 || UNITREE Robotics || MJCF || [https://github.com/google-deepmind/mujoco_menagerie/tree/main/unitree_h1 MJCF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 4 || Atlas v4 || Boston Dynamics || URDF || [https://github.com/openai/roboschool/tree/1.0.49/roboschool/models_robot/atlas_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 5 || Valkyrie || NASA JSC Robotics || URDF, Xacro || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/val_description/model URDF], [https://gitlab.com/nasa-jsc-robotics/val_description Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 6 || JVRC-1 || AIST || MJCF, URDF || [https://github.com/isri-aist/jvrc_mj_description/ MJCF], [https://github.com/stephane-caron/jvrc_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 7 || iCub || IIT || URDF || [https://github.com/robotology/icub-models/tree/master/iCub URDF] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| 8 || JAXON || JSK || COLLADA, URDF, VRML || [https://github.com/stephane-caron/openrave_models/tree/master/JAXON COLLADA], [https://github.com/robot-descriptions/jaxon_description URDF], [https://github.com/start-jsk/rtmros_choreonoid/tree/master/jvrc_models/JAXON_JVRC VRML] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| 9 || Atlas DRC (v3) || Boston Dynamics || URDF || [https://github.com/RobotLocomotion/models/blob/master/atlas/atlas_convex_hull.urdf URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 10 || Gundam RX-78 || Bandai Namco Filmworks || URDF || [https://github.com/gundam-global-challenge/gundam_robot/tree/master/gundam_rx78_description URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| 11 || Romeo || Aldebaran Robotics || URDF || [https://github.com/ros-aldebaran/romeo_robot/tree/master/romeo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 12 || SigmaBan || Rhoban || URDF || [https://github.com/Rhoban/sigmaban_urdf URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 13 || Robonaut 2 || NASA JSC Robotics || URDF || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/r2_description URDF] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 14 || TALOS || PAL Robotics || URDF, Xacro || [https://github.com/stack-of-tasks/talos-data URDF], [https://github.com/pal-robotics/talos_robot/tree/kinetic-devel/talos_description Xacro] || LGPL-3.0, Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 15 || WALK-MAN || IIT || Xacro || [https://github.com/ADVRHumanoids/iit-walkman-ros-pkg/tree/master/walkman_urdf Xacro] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 16 || Draco3 || Apptronik || URDF || [https://github.com/shbang91/draco3_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 17 || ergoCub || IIT || URDF || [https://github.com/icub-tech-iit/ergocub-software/tree/master/urdf/ergoCub URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 18 || Baxter || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/baxter.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 19 || Pepper || SoftBank Robotics || URDF || [https://github.com/ros-naoqi/pepper_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 20 || Mini-Cheetah || MIT || URDF || [https://github.com/MIT-Mini-Cheetah/mini-cheetah URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 21 || Thor-Mang || ROBOTIS || URDF || [https://github.com/ROBOTIS-GIT/ROBOTIS-MANIPULATION-THORMANG URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 22 || Cassie || Agility Robotics || URDF || [https://github.com/agilityrobotics/cassie_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 23 || Sophia || Hanson Robotics || URDF || [https://github.com/hansonrobotics/sophia_robot URDF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 24 || Asimo || Honda || URDF || [https://github.com/honda/asimo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 25 || HRP-5P || Kawada Robotics || URDF || [https://github.com/kawada-robotics/hrp5p URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 26 || Valkyrie R5 || NASA || URDF, Xacro || [https://github.com/nasa/valkyrie_simulation URDF], [https://github.com/nasa/valkyrie_robot Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 27 || REEM-C || PAL Robotics || URDF || [https://github.com/pal-robotics/reemc_description URDF] || LGPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 28 || Darwin-OP || ROBOTIS || URDF || [https://github.com/ROBOTIS-GIT/Darwin_OP_ROS URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 29 || Poppy || Inria Flowers || URDF || [https://github.com/poppy-project/poppy_humanoid URDF] || GPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 30 || Kengoro || JSK || URDF || [https://github.com/jsk-ros-pkg/jsk_models/tree/master/kengoro_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 31 || SURALP || Istanbul Technical University || URDF || [https://github.com/suralp/suralp URDF] || GPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 32 || Kengoro || JSK || URDF || [https://github.com/jsk-ros-pkg/jsk_models/tree/master/kengoro_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 33 || ANYmal || ANYbotics || URDF || [https://github.com/leggedrobotics/anymal_b_simple_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 34 || MIR-Lola || Munich Institute of Robotics and Machine Intelligence || URDF || [https://github.com/mir-lab/lola_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 35 || HSR || Toyota || URDF || [https://github.com/toyota-research-institute/hsr_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 36 || Pepper 2 || SoftBank Robotics || URDF || [https://github.com/ros-naoqi/pepper_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 37 || BHR-4 || Beijing Institute of Technology || URDF || [https://github.com/bit-bots/bhr4_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 38 || Tiago || PAL Robotics || URDF || [https://github.com/pal-robotics/tiago_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 39 || InMoov || Gael Langevin || URDF || [https://github.com/InMoov/inmoov_ros URDF] || GPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 40 || BipedalWalker || OpenAI Gym || URDF || [https://github.com/openai/gym/tree/master/gym/envs/robotics/assets/bipedal_walker URDF] || MIT || ✔️ || ✔️ || ✔️
|}
=== End Effectors ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Allegro Hand || Wonik Robotics || URDF, MJCF || [https://github.com/RobotLocomotion/models/tree/master/allegro_hand_description/urdf URDF], [https://github.com/google-deepmind/mujoco_menagerie/tree/main/wonik_allegro MJCF] || BSD || ✔️ || ✔️ || ✔️
|-
| Shadow Hand E3M5 || The Shadow Robot Company || MJCF || [https://github.com/deepmind/mujoco_menagerie/tree/main/shadow_hand MJCF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Robotiq 2F-85 || Robotiq || MJCF, URDF, Xacro || [https://github.com/deepmind/mujoco_menagerie/tree/main/robotiq_2f85 MJCF], [https://github.com/a-price/robotiq_arg85_description URDF], [https://github.com/ros-industrial/robotiq/tree/kinetic-devel/robotiq_2f_85_gripper_visualization Xacro] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| BarrettHand || Barrett Technology || URDF || [https://github.com/jhu-lcsr-attic/bhand_model/tree/master/robots URDF] || BSD || ✖️ || ✔️ || ✔️
|-
| WSG 50 || SCHUNK || SDF || [https://github.com/RobotLocomotion/models/tree/master/wsg_50_description SDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Baxter Left End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/left_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Baxter Right End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/right_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|}
=== Educational ===
{| class="wikitable"
|-
! Name !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Double Pendulum || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/double_pendulum_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Simple Humanoid || URDF || [https://github.com/laas/simple_humanoid_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✖️
|-
| TriFingerEdu || URDF || [https://github.com/facebookresearch/differentiable-robot-model/tree/main/diff_robot_data/trifinger_edu_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| FingerEdu || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/finger_edu_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
== References ==
* https://github.com/robot-descriptions/awesome-robot-descriptions
* https://github.com/robot-descriptions/robot_descriptions.py
* https://github.com/robotology
00763d39c5ba867005ad46f641bb827a79bddd63
1447
1446
2024-06-05T19:42:07Z
Vrtnis
21
/* References */
wikitext
text/x-wiki
=== Humanoids ===
{| class="wikitable sortable"
|-
! Sr No !! Name !! Maker !! Formats !! URL !! License !! Meshes !! Inertias !! Collisions
|-
| 1 || Stompy || K-Scale Labs || URDF || [https://stompy.kscale.dev URDF], [https://stompy.kscale.dev MJCF] || MIT || ✔️ || ✔️ || ✔️
|-
| 2 || Digit || Agility Robotics || URDF || [https://github.com/adubredu/DigitRobot.jl URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| 3 || H1 || UNITREE Robotics || MJCF || [https://github.com/google-deepmind/mujoco_menagerie/tree/main/unitree_h1 MJCF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 4 || Atlas v4 || Boston Dynamics || URDF || [https://github.com/openai/roboschool/tree/1.0.49/roboschool/models_robot/atlas_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 5 || Valkyrie || NASA JSC Robotics || URDF, Xacro || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/val_description/model URDF], [https://gitlab.com/nasa-jsc-robotics/val_description Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 6 || JVRC-1 || AIST || MJCF, URDF || [https://github.com/isri-aist/jvrc_mj_description/ MJCF], [https://github.com/stephane-caron/jvrc_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 7 || iCub || IIT || URDF || [https://github.com/robotology/icub-models/tree/master/iCub URDF] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| 8 || JAXON || JSK || COLLADA, URDF, VRML || [https://github.com/stephane-caron/openrave_models/tree/master/JAXON COLLADA], [https://github.com/robot-descriptions/jaxon_description URDF], [https://github.com/start-jsk/rtmros_choreonoid/tree/master/jvrc_models/JAXON_JVRC VRML] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| 9 || Atlas DRC (v3) || Boston Dynamics || URDF || [https://github.com/RobotLocomotion/models/blob/master/atlas/atlas_convex_hull.urdf URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 10 || Gundam RX-78 || Bandai Namco Filmworks || URDF || [https://github.com/gundam-global-challenge/gundam_robot/tree/master/gundam_rx78_description URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| 11 || Romeo || Aldebaran Robotics || URDF || [https://github.com/ros-aldebaran/romeo_robot/tree/master/romeo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 12 || SigmaBan || Rhoban || URDF || [https://github.com/Rhoban/sigmaban_urdf URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 13 || Robonaut 2 || NASA JSC Robotics || URDF || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/r2_description URDF] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 14 || TALOS || PAL Robotics || URDF, Xacro || [https://github.com/stack-of-tasks/talos-data URDF], [https://github.com/pal-robotics/talos_robot/tree/kinetic-devel/talos_description Xacro] || LGPL-3.0, Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 15 || WALK-MAN || IIT || Xacro || [https://github.com/ADVRHumanoids/iit-walkman-ros-pkg/tree/master/walkman_urdf Xacro] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 16 || Draco3 || Apptronik || URDF || [https://github.com/shbang91/draco3_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 17 || ergoCub || IIT || URDF || [https://github.com/icub-tech-iit/ergocub-software/tree/master/urdf/ergoCub URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 18 || Baxter || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/baxter.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 19 || Pepper || SoftBank Robotics || URDF || [https://github.com/ros-naoqi/pepper_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 20 || Mini-Cheetah || MIT || URDF || [https://github.com/MIT-Mini-Cheetah/mini-cheetah URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 21 || Thor-Mang || ROBOTIS || URDF || [https://github.com/ROBOTIS-GIT/ROBOTIS-MANIPULATION-THORMANG URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 22 || Cassie || Agility Robotics || URDF || [https://github.com/agilityrobotics/cassie_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 23 || Sophia || Hanson Robotics || URDF || [https://github.com/hansonrobotics/sophia_robot URDF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 24 || Asimo || Honda || URDF || [https://github.com/honda/asimo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 25 || HRP-5P || Kawada Robotics || URDF || [https://github.com/kawada-robotics/hrp5p URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 26 || Valkyrie R5 || NASA || URDF, Xacro || [https://github.com/nasa/valkyrie_simulation URDF], [https://github.com/nasa/valkyrie_robot Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 27 || REEM-C || PAL Robotics || URDF || [https://github.com/pal-robotics/reemc_description URDF] || LGPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 28 || Darwin-OP || ROBOTIS || URDF || [https://github.com/ROBOTIS-GIT/Darwin_OP_ROS URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 29 || Poppy || Inria Flowers || URDF || [https://github.com/poppy-project/poppy_humanoid URDF] || GPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 30 || Kengoro || JSK || URDF || [https://github.com/jsk-ros-pkg/jsk_models/tree/master/kengoro_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 31 || SURALP || Istanbul Technical University || URDF || [https://github.com/suralp/suralp URDF] || GPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 32 || ANYmal || ANYbotics || URDF || [https://github.com/leggedrobotics/anymal_b_simple_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 33 || MIR-Lola || Munich Institute of Robotics and Machine Intelligence || URDF || [https://github.com/mir-lab/lola_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 34 || HSR || Toyota || URDF || [https://github.com/toyota-research-institute/hsr_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 35 || Pepper 2 || SoftBank Robotics || URDF || [https://github.com/ros-naoqi/pepper_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 36 || BHR-4 || Beijing Institute of Technology || URDF || [https://github.com/bit-bots/bhr4_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 37 || Tiago || PAL Robotics || URDF || [https://github.com/pal-robotics/tiago_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 38 || InMoov || Gael Langevin || URDF || [https://github.com/InMoov/inmoov_ros URDF] || GPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 39 || BipedalWalker || OpenAI Gym || URDF || [https://github.com/openai/gym/tree/master/gym/envs/robotics/assets/bipedal_walker URDF] || MIT || ✔️ || ✔️ || ✔️
|-
|}
=== End Effectors ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Allegro Hand || Wonik Robotics || URDF, MJCF || [https://github.com/RobotLocomotion/models/tree/master/allegro_hand_description/urdf URDF], [https://github.com/google-deepmind/mujoco_menagerie/tree/main/wonik_allegro MJCF] || BSD || ✔️ || ✔️ || ✔️
|-
| Shadow Hand E3M5 || The Shadow Robot Company || MJCF || [https://github.com/deepmind/mujoco_menagerie/tree/main/shadow_hand MJCF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Robotiq 2F-85 || Robotiq || MJCF, URDF, Xacro || [https://github.com/deepmind/mujoco_menagerie/tree/main/robotiq_2f85 MJCF], [https://github.com/a-price/robotiq_arg85_description URDF], [https://github.com/ros-industrial/robotiq/tree/kinetic-devel/robotiq_2f_85_gripper_visualization Xacro] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| BarrettHand || Barrett Technology || URDF || [https://github.com/jhu-lcsr-attic/bhand_model/tree/master/robots URDF] || BSD || ✖️ || ✔️ || ✔️
|-
| WSG 50 || SCHUNK || SDF || [https://github.com/RobotLocomotion/models/tree/master/wsg_50_description SDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Baxter Left End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/left_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Baxter Right End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/right_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|}
=== Educational ===
{| class="wikitable"
|-
! Name !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Double Pendulum || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/double_pendulum_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Simple Humanoid || URDF || [https://github.com/laas/simple_humanoid_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✖️
|-
| TriFingerEdu || URDF || [https://github.com/facebookresearch/differentiable-robot-model/tree/main/diff_robot_data/trifinger_edu_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| FingerEdu || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/finger_edu_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
== References ==
* GitHub and web searches
* https://github.com/robot-descriptions/awesome-robot-descriptions
* https://github.com/robot-descriptions/robot_descriptions.py
* https://github.com/robotology
a6d542853f67af75cfc04c9966cfc29c5b9b4fd7
1448
1447
2024-06-05T19:47:08Z
Vrtnis
21
/*Add cite*/
wikitext
text/x-wiki
=== Humanoids ===
{| class="wikitable sortable"
|-
! Sr No !! Name !! Maker !! Formats !! URL !! License !! Meshes !! Inertias !! Collisions
|-
| 1 || Stompy || K-Scale Labs || URDF || [https://stompy.kscale.dev URDF], [https://stompy.kscale.dev MJCF] || MIT || ✔️ || ✔️ || ✔️
|-
| 2 || Digit || Agility Robotics || URDF || [https://github.com/adubredu/DigitRobot.jl URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| 3 || H1 || UNITREE Robotics || MJCF || [https://github.com/google-deepmind/mujoco_menagerie/tree/main/unitree_h1 MJCF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 4 || Atlas v4 || Boston Dynamics || URDF || [https://github.com/openai/roboschool/tree/1.0.49/roboschool/models_robot/atlas_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 5 || Valkyrie || NASA JSC Robotics || URDF, Xacro || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/val_description/model URDF], [https://gitlab.com/nasa-jsc-robotics/val_description Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 6 || JVRC-1 || AIST || MJCF, URDF || [https://github.com/isri-aist/jvrc_mj_description/ MJCF], [https://github.com/stephane-caron/jvrc_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 7 || iCub || IIT || URDF || [https://github.com/robotology/icub-models/tree/master/iCub URDF] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| 8 || JAXON || JSK || COLLADA, URDF, VRML || [https://github.com/stephane-caron/openrave_models/tree/master/JAXON COLLADA], [https://github.com/robot-descriptions/jaxon_description URDF], [https://github.com/start-jsk/rtmros_choreonoid/tree/master/jvrc_models/JAXON_JVRC VRML] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| 9 || Atlas DRC (v3) || Boston Dynamics || URDF || [https://github.com/RobotLocomotion/models/blob/master/atlas/atlas_convex_hull.urdf URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 10 || Gundam RX-78 || Bandai Namco Filmworks || URDF || [https://github.com/gundam-global-challenge/gundam_robot/tree/master/gundam_rx78_description URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| 11 || Romeo || Aldebaran Robotics || URDF || [https://github.com/ros-aldebaran/romeo_robot/tree/master/romeo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 12 || SigmaBan || Rhoban || URDF || [https://github.com/Rhoban/sigmaban_urdf URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 13 || Robonaut 2 || NASA JSC Robotics || URDF || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/r2_description URDF] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 14 || TALOS || PAL Robotics || URDF, Xacro || [https://github.com/stack-of-tasks/talos-data URDF], [https://github.com/pal-robotics/talos_robot/tree/kinetic-devel/talos_description Xacro] || LGPL-3.0, Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 15 || WALK-MAN || IIT || Xacro || [https://github.com/ADVRHumanoids/iit-walkman-ros-pkg/tree/master/walkman_urdf Xacro] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 16 || Draco3 || Apptronik || URDF || [https://github.com/shbang91/draco3_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 17 || ergoCub || IIT || URDF || [https://github.com/icub-tech-iit/ergocub-software/tree/master/urdf/ergoCub URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 18 || Baxter || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/baxter.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 19 || Pepper || SoftBank Robotics || URDF || [https://github.com/ros-naoqi/pepper_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 20 || Mini-Cheetah || MIT || URDF || [https://github.com/MIT-Mini-Cheetah/mini-cheetah URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 21 || Thor-Mang || ROBOTIS || URDF || [https://github.com/ROBOTIS-GIT/ROBOTIS-MANIPULATION-THORMANG URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 22 || Cassie || Agility Robotics || URDF || [https://github.com/agilityrobotics/cassie_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 23 || Sophia || Hanson Robotics || URDF || [https://github.com/hansonrobotics/sophia_robot URDF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 24 || Asimo || Honda || URDF || [https://github.com/honda/asimo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 25 || HRP-5P || Kawada Robotics || URDF || [https://github.com/kawada-robotics/hrp5p URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 26 || Valkyrie R5 || NASA || URDF, Xacro || [https://github.com/nasa/valkyrie_simulation URDF], [https://github.com/nasa/valkyrie_robot Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 27 || REEM-C || PAL Robotics || URDF || [https://github.com/pal-robotics/reemc_description URDF] || LGPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 28 || Darwin-OP || ROBOTIS || URDF || [https://github.com/ROBOTIS-GIT/Darwin_OP_ROS URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 29 || Poppy || Inria Flowers || URDF || [https://github.com/poppy-project/poppy_humanoid URDF] || GPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 30 || Kengoro || JSK || URDF || [https://github.com/jsk-ros-pkg/jsk_models/tree/master/kengoro_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 31 || SURALP || Istanbul Technical University || URDF || [https://github.com/suralp/suralp URDF] || GPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 32 || Kengoro || JSK || URDF || [https://github.com/jsk-ros-pkg/jsk_models/tree/master/kengoro_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 33 || ANYmal || ANYbotics || URDF || [https://github.com/leggedrobotics/anymal_b_simple_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 34 || MIR-Lola || Munich Institute of Robotics and Machine Intelligence || URDF || [https://github.com/mir-lab/lola_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 35 || HSR || Toyota || URDF || [https://github.com/toyota-research-institute/hsr_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 36 || Pepper 2 || SoftBank Robotics || URDF || [https://github.com/ros-naoqi/pepper_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 37 || BHR-4 || Beijing Institute of Technology || URDF || [https://github.com/bit-bots/bhr4_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 38 || Tiago || PAL Robotics || URDF || [https://github.com/pal-robotics/tiago_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 39 || InMoov || Gael Langevin || URDF || [https://github.com/InMoov/inmoov_ros URDF] || GPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 40 || BipedalWalker || OpenAI Gym || URDF || [https://github.com/openai/gym/tree/master/gym/envs/robotics/assets/bipedal_walker URDF] || MIT || ✔️ || ✔️ || ✔️
|-
|}
=== End Effectors ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Allegro Hand || Wonik Robotics || URDF, MJCF || [https://github.com/RobotLocomotion/models/tree/master/allegro_hand_description/urdf URDF], [https://github.com/google-deepmind/mujoco_menagerie/tree/main/wonik_allegro MJCF] || BSD || ✔️ || ✔️ || ✔️
|-
| Shadow Hand E3M5 || The Shadow Robot Company || MJCF || [https://github.com/deepmind/mujoco_menagerie/tree/main/shadow_hand MJCF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Robotiq 2F-85 || Robotiq || MJCF, URDF, Xacro || [https://github.com/deepmind/mujoco_menagerie/tree/main/robotiq_2f85 MJCF], [https://github.com/a-price/robotiq_arg85_description URDF], [https://github.com/ros-industrial/robotiq/tree/kinetic-devel/robotiq_2f_85_gripper_visualization Xacro] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| BarrettHand || Barrett Technology || URDF || [https://github.com/jhu-lcsr-attic/bhand_model/tree/master/robots URDF] || BSD || ✖️ || ✔️ || ✔️
|-
| WSG 50 || SCHUNK || SDF || [https://github.com/RobotLocomotion/models/tree/master/wsg_50_description SDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Baxter Left End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/left_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Baxter Right End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/right_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|}
=== Educational ===
{| class="wikitable"
|-
! Name !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Double Pendulum || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/double_pendulum_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Simple Humanoid || URDF || [https://github.com/laas/simple_humanoid_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✖️
|-
| TriFingerEdu || URDF || [https://github.com/facebookresearch/differentiable-robot-model/tree/main/diff_robot_data/trifinger_edu_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| FingerEdu || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/finger_edu_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
== References ==
* GitHub and web searches
* https://github.com/robot-descriptions/awesome-robot-descriptions
* https://github.com/robot-descriptions/robot_descriptions.py
* https://github.com/robotology
@misc{humanoids-2024,
title={Robot Descriptions List},
author={K-Scale Humanoids Wiki Contributors},
year={2024},
url={https://humanoids.wiki/w/Robot_Descriptions_List}
}
b68b27443341884a635a6e2be3c96ef3a0fc3348
1449
1448
2024-06-05T19:47:42Z
Vrtnis
21
wikitext
text/x-wiki
=== Humanoids ===
{| class="wikitable sortable"
|-
! Sr No !! Name !! Maker !! Formats !! URL !! License !! Meshes !! Inertias !! Collisions
|-
| 1 || Stompy || K-Scale Labs || URDF || [https://stompy.kscale.dev URDF], [https://stompy.kscale.dev MJCF] || MIT || ✔️ || ✔️ || ✔️
|-
| 2 || Digit || Agility Robotics || URDF || [https://github.com/adubredu/DigitRobot.jl URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| 3 || H1 || UNITREE Robotics || MJCF || [https://github.com/google-deepmind/mujoco_menagerie/tree/main/unitree_h1 MJCF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 4 || Atlas v4 || Boston Dynamics || URDF || [https://github.com/openai/roboschool/tree/1.0.49/roboschool/models_robot/atlas_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 5 || Valkyrie || NASA JSC Robotics || URDF, Xacro || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/val_description/model URDF], [https://gitlab.com/nasa-jsc-robotics/val_description Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 6 || JVRC-1 || AIST || MJCF, URDF || [https://github.com/isri-aist/jvrc_mj_description/ MJCF], [https://github.com/stephane-caron/jvrc_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 7 || iCub || IIT || URDF || [https://github.com/robotology/icub-models/tree/master/iCub URDF] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| 8 || JAXON || JSK || COLLADA, URDF, VRML || [https://github.com/stephane-caron/openrave_models/tree/master/JAXON COLLADA], [https://github.com/robot-descriptions/jaxon_description URDF], [https://github.com/start-jsk/rtmros_choreonoid/tree/master/jvrc_models/JAXON_JVRC VRML] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| 9 || Atlas DRC (v3) || Boston Dynamics || URDF || [https://github.com/RobotLocomotion/models/blob/master/atlas/atlas_convex_hull.urdf URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 10 || Gundam RX-78 || Bandai Namco Filmworks || URDF || [https://github.com/gundam-global-challenge/gundam_robot/tree/master/gundam_rx78_description URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| 11 || Romeo || Aldebaran Robotics || URDF || [https://github.com/ros-aldebaran/romeo_robot/tree/master/romeo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 12 || SigmaBan || Rhoban || URDF || [https://github.com/Rhoban/sigmaban_urdf URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 13 || Robonaut 2 || NASA JSC Robotics || URDF || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/r2_description URDF] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 14 || TALOS || PAL Robotics || URDF, Xacro || [https://github.com/stack-of-tasks/talos-data URDF], [https://github.com/pal-robotics/talos_robot/tree/kinetic-devel/talos_description Xacro] || LGPL-3.0, Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 15 || WALK-MAN || IIT || Xacro || [https://github.com/ADVRHumanoids/iit-walkman-ros-pkg/tree/master/walkman_urdf Xacro] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 16 || Draco3 || Apptronik || URDF || [https://github.com/shbang91/draco3_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 17 || ergoCub || IIT || URDF || [https://github.com/icub-tech-iit/ergocub-software/tree/master/urdf/ergoCub URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 18 || Baxter || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/baxter.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 19 || Pepper || SoftBank Robotics || URDF || [https://github.com/ros-naoqi/pepper_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 20 || Mini-Cheetah || MIT || URDF || [https://github.com/MIT-Mini-Cheetah/mini-cheetah URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 21 || Thor-Mang || ROBOTIS || URDF || [https://github.com/ROBOTIS-GIT/ROBOTIS-MANIPULATION-THORMANG URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 22 || Cassie || Agility Robotics || URDF || [https://github.com/agilityrobotics/cassie_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 23 || Sophia || Hanson Robotics || URDF || [https://github.com/hansonrobotics/sophia_robot URDF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 24 || Asimo || Honda || URDF || [https://github.com/honda/asimo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 25 || HRP-5P || Kawada Robotics || URDF || [https://github.com/kawada-robotics/hrp5p URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 26 || Valkyrie R5 || NASA || URDF, Xacro || [https://github.com/nasa/valkyrie_simulation URDF], [https://github.com/nasa/valkyrie_robot Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 27 || REEM-C || PAL Robotics || URDF || [https://github.com/pal-robotics/reemc_description URDF] || LGPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 28 || Darwin-OP || ROBOTIS || URDF || [https://github.com/ROBOTIS-GIT/Darwin_OP_ROS URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 29 || Poppy || Inria Flowers || URDF || [https://github.com/poppy-project/poppy_humanoid URDF] || GPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 30 || Kengoro || JSK || URDF || [https://github.com/jsk-ros-pkg/jsk_models/tree/master/kengoro_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 31 || SURALP || Istanbul Technical University || URDF || [https://github.com/suralp/suralp URDF] || GPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 32 || Kengoro || JSK || URDF || [https://github.com/jsk-ros-pkg/jsk_models/tree/master/kengoro_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 33 || ANYmal || ANYbotics || URDF || [https://github.com/leggedrobotics/anymal_b_simple_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 34 || MIR-Lola || Munich Institute of Robotics and Machine Intelligence || URDF || [https://github.com/mir-lab/lola_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 35 || HSR || Toyota || URDF || [https://github.com/toyota-research-institute/hsr_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 36 || Pepper 2 || SoftBank Robotics || URDF || [https://github.com/ros-naoqi/pepper_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 37 || BHR-4 || Beijing Institute of Technology || URDF || [https://github.com/bit-bots/bhr4_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 38 || Tiago || PAL Robotics || URDF || [https://github.com/pal-robotics/tiago_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 39 || InMoov || Gael Langevin || URDF || [https://github.com/InMoov/inmoov_ros URDF] || GPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 40 || BipedalWalker || OpenAI Gym || URDF || [https://github.com/openai/gym/tree/master/gym/envs/robotics/assets/bipedal_walker URDF] || MIT || ✔️ || ✔️ || ✔️
|-
|}
=== End Effectors ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Allegro Hand || Wonik Robotics || URDF, MJCF || [https://github.com/RobotLocomotion/models/tree/master/allegro_hand_description/urdf URDF], [https://github.com/google-deepmind/mujoco_menagerie/tree/main/wonik_allegro MJCF] || BSD || ✔️ || ✔️ || ✔️
|-
| Shadow Hand E3M5 || The Shadow Robot Company || MJCF || [https://github.com/deepmind/mujoco_menagerie/tree/main/shadow_hand MJCF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Robotiq 2F-85 || Robotiq || MJCF, URDF, Xacro || [https://github.com/deepmind/mujoco_menagerie/tree/main/robotiq_2f85 MJCF], [https://github.com/a-price/robotiq_arg85_description URDF], [https://github.com/ros-industrial/robotiq/tree/kinetic-devel/robotiq_2f_85_gripper_visualization Xacro] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| BarrettHand || Barrett Technology || URDF || [https://github.com/jhu-lcsr-attic/bhand_model/tree/master/robots URDF] || BSD || ✖️ || ✔️ || ✔️
|-
| WSG 50 || SCHUNK || SDF || [https://github.com/RobotLocomotion/models/tree/master/wsg_50_description SDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Baxter Left End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/left_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Baxter Right End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/right_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|}
=== Educational ===
{| class="wikitable"
|-
! Name !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Double Pendulum || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/double_pendulum_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Simple Humanoid || URDF || [https://github.com/laas/simple_humanoid_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✖️
|-
| TriFingerEdu || URDF || [https://github.com/facebookresearch/differentiable-robot-model/tree/main/diff_robot_data/trifinger_edu_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| FingerEdu || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/finger_edu_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
== References ==
* GitHub and web searches
* https://github.com/robot-descriptions/awesome-robot-descriptions
* https://github.com/robot-descriptions/robot_descriptions.py
* https://github.com/robotology
== Citation ==
{@misc{humanoids-2024,
title={Robot Descriptions List},
author={K-Scale Humanoids Wiki Contributors},
year={2024},
url={https://humanoids.wiki/w/Robot_Descriptions_List}
}
d37ec6e202b14ba5f6c39655a57fb30f1c01827f
1450
1449
2024-06-05T19:50:53Z
Vrtnis
21
wikitext
text/x-wiki
=== Humanoids ===
{| class="wikitable sortable"
|-
! Sr No !! Name !! Maker !! Formats !! URL !! License !! Meshes !! Inertias !! Collisions
|-
| 1 || Stompy || K-Scale Labs || URDF || [https://stompy.kscale.dev URDF], [https://stompy.kscale.dev MJCF] || MIT || ✔️ || ✔️ || ✔️
|-
| 2 || Digit || Agility Robotics || URDF || [https://github.com/adubredu/DigitRobot.jl URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| 3 || H1 || UNITREE Robotics || MJCF || [https://github.com/google-deepmind/mujoco_menagerie/tree/main/unitree_h1 MJCF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 4 || Atlas v4 || Boston Dynamics || URDF || [https://github.com/openai/roboschool/tree/1.0.49/roboschool/models_robot/atlas_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 5 || Valkyrie || NASA JSC Robotics || URDF, Xacro || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/val_description/model URDF], [https://gitlab.com/nasa-jsc-robotics/val_description Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 6 || JVRC-1 || AIST || MJCF, URDF || [https://github.com/isri-aist/jvrc_mj_description/ MJCF], [https://github.com/stephane-caron/jvrc_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 7 || iCub || IIT || URDF || [https://github.com/robotology/icub-models/tree/master/iCub URDF] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| 8 || JAXON || JSK || COLLADA, URDF, VRML || [https://github.com/stephane-caron/openrave_models/tree/master/JAXON COLLADA], [https://github.com/robot-descriptions/jaxon_description URDF], [https://github.com/start-jsk/rtmros_choreonoid/tree/master/jvrc_models/JAXON_JVRC VRML] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| 9 || Atlas DRC (v3) || Boston Dynamics || URDF || [https://github.com/RobotLocomotion/models/blob/master/atlas/atlas_convex_hull.urdf URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 10 || Gundam RX-78 || Bandai Namco Filmworks || URDF || [https://github.com/gundam-global-challenge/gundam_robot/tree/master/gundam_rx78_description URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| 11 || Romeo || Aldebaran Robotics || URDF || [https://github.com/ros-aldebaran/romeo_robot/tree/master/romeo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 12 || SigmaBan || Rhoban || URDF || [https://github.com/Rhoban/sigmaban_urdf URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 13 || Robonaut 2 || NASA JSC Robotics || URDF || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/r2_description URDF] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 14 || TALOS || PAL Robotics || URDF, Xacro || [https://github.com/stack-of-tasks/talos-data URDF], [https://github.com/pal-robotics/talos_robot/tree/kinetic-devel/talos_description Xacro] || LGPL-3.0, Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 15 || WALK-MAN || IIT || Xacro || [https://github.com/ADVRHumanoids/iit-walkman-ros-pkg/tree/master/walkman_urdf Xacro] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 16 || Draco3 || Apptronik || URDF || [https://github.com/shbang91/draco3_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 17 || ergoCub || IIT || URDF || [https://github.com/icub-tech-iit/ergocub-software/tree/master/urdf/ergoCub URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 18 || Baxter || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/baxter.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 19 || Pepper || SoftBank Robotics || URDF || [https://github.com/ros-naoqi/pepper_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 20 || Mini-Cheetah || MIT || URDF || [https://github.com/MIT-Mini-Cheetah/mini-cheetah URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 21 || Thor-Mang || ROBOTIS || URDF || [https://github.com/ROBOTIS-GIT/ROBOTIS-MANIPULATION-THORMANG URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 22 || Cassie || Agility Robotics || URDF || [https://github.com/agilityrobotics/cassie_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 23 || Sophia || Hanson Robotics || URDF || [https://github.com/hansonrobotics/sophia_robot URDF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 24 || Asimo || Honda || URDF || [https://github.com/honda/asimo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 25 || HRP-5P || Kawada Robotics || URDF || [https://github.com/kawada-robotics/hrp5p URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 26 || Valkyrie R5 || NASA || URDF, Xacro || [https://github.com/nasa/valkyrie_simulation URDF], [https://github.com/nasa/valkyrie_robot Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 27 || REEM-C || PAL Robotics || URDF || [https://github.com/pal-robotics/reemc_description URDF] || LGPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 28 || Darwin-OP || ROBOTIS || URDF || [https://github.com/ROBOTIS-GIT/Darwin_OP_ROS URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 29 || Poppy || Inria Flowers || URDF || [https://github.com/poppy-project/poppy_humanoid URDF] || GPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 30 || Kengoro || JSK || URDF || [https://github.com/jsk-ros-pkg/jsk_models/tree/master/kengoro_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 31 || SURALP || Istanbul Technical University || URDF || [https://github.com/suralp/suralp URDF] || GPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 32 || Kengoro || JSK || URDF || [https://github.com/jsk-ros-pkg/jsk_models/tree/master/kengoro_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 33 || ANYmal || ANYbotics || URDF || [https://github.com/leggedrobotics/anymal_b_simple_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 34 || MIR-Lola || Munich Institute of Robotics and Machine Intelligence || URDF || [https://github.com/mir-lab/lola_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 35 || HSR || Toyota || URDF || [https://github.com/toyota-research-institute/hsr_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 36 || Pepper 2 || SoftBank Robotics || URDF || [https://github.com/ros-naoqi/pepper_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 37 || BHR-4 || Beijing Institute of Technology || URDF || [https://github.com/bit-bots/bhr4_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 38 || Tiago || PAL Robotics || URDF || [https://github.com/pal-robotics/tiago_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 39 || InMoov || Gael Langevin || URDF || [https://github.com/InMoov/inmoov_ros URDF] || GPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 40 || BipedalWalker || OpenAI Gym || URDF || [https://github.com/openai/gym/tree/master/gym/envs/robotics/assets/bipedal_walker URDF] || MIT || ✔️ || ✔️ || ✔️
|-
|}
=== End Effectors ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Allegro Hand || Wonik Robotics || URDF, MJCF || [https://github.com/RobotLocomotion/models/tree/master/allegro_hand_description/urdf URDF], [https://github.com/google-deepmind/mujoco_menagerie/tree/main/wonik_allegro MJCF] || BSD || ✔️ || ✔️ || ✔️
|-
| Shadow Hand E3M5 || The Shadow Robot Company || MJCF || [https://github.com/deepmind/mujoco_menagerie/tree/main/shadow_hand MJCF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Robotiq 2F-85 || Robotiq || MJCF, URDF, Xacro || [https://github.com/deepmind/mujoco_menagerie/tree/main/robotiq_2f85 MJCF], [https://github.com/a-price/robotiq_arg85_description URDF], [https://github.com/ros-industrial/robotiq/tree/kinetic-devel/robotiq_2f_85_gripper_visualization Xacro] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| BarrettHand || Barrett Technology || URDF || [https://github.com/jhu-lcsr-attic/bhand_model/tree/master/robots URDF] || BSD || ✖️ || ✔️ || ✔️
|-
| WSG 50 || SCHUNK || SDF || [https://github.com/RobotLocomotion/models/tree/master/wsg_50_description SDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Baxter Left End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/left_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Baxter Right End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/right_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|}
=== Educational ===
{| class="wikitable"
|-
! Name !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Double Pendulum || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/double_pendulum_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Simple Humanoid || URDF || [https://github.com/laas/simple_humanoid_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✖️
|-
| TriFingerEdu || URDF || [https://github.com/facebookresearch/differentiable-robot-model/tree/main/diff_robot_data/trifinger_edu_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| FingerEdu || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/finger_edu_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
== References ==
* GitHub and web searches
* https://github.com/robot-descriptions/awesome-robot-descriptions
* https://github.com/robot-descriptions/robot_descriptions.py
* https://github.com/robotology
== Citation ==
<pre>
@misc{humanoids-2024,
title={Robot Descriptions List},
author={K-Scale Humanoids Wiki Authors},
year={2024},
url={https://humanoids.wiki/w/Robot_Descriptions_List}
}
</pre>
18b54daea8139fbc158420f7084d3d022dd8e57d
1451
1450
2024-06-05T19:51:08Z
Vrtnis
21
/* Citation */
wikitext
text/x-wiki
=== Humanoids ===
{| class="wikitable sortable"
|-
! Sr No !! Name !! Maker !! Formats !! URL !! License !! Meshes !! Inertias !! Collisions
|-
| 1 || Stompy || K-Scale Labs || URDF || [https://stompy.kscale.dev URDF], [https://stompy.kscale.dev MJCF] || MIT || ✔️ || ✔️ || ✔️
|-
| 2 || Digit || Agility Robotics || URDF || [https://github.com/adubredu/DigitRobot.jl URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| 3 || H1 || UNITREE Robotics || MJCF || [https://github.com/google-deepmind/mujoco_menagerie/tree/main/unitree_h1 MJCF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 4 || Atlas v4 || Boston Dynamics || URDF || [https://github.com/openai/roboschool/tree/1.0.49/roboschool/models_robot/atlas_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 5 || Valkyrie || NASA JSC Robotics || URDF, Xacro || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/val_description/model URDF], [https://gitlab.com/nasa-jsc-robotics/val_description Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 6 || JVRC-1 || AIST || MJCF, URDF || [https://github.com/isri-aist/jvrc_mj_description/ MJCF], [https://github.com/stephane-caron/jvrc_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 7 || iCub || IIT || URDF || [https://github.com/robotology/icub-models/tree/master/iCub URDF] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| 8 || JAXON || JSK || COLLADA, URDF, VRML || [https://github.com/stephane-caron/openrave_models/tree/master/JAXON COLLADA], [https://github.com/robot-descriptions/jaxon_description URDF], [https://github.com/start-jsk/rtmros_choreonoid/tree/master/jvrc_models/JAXON_JVRC VRML] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| 9 || Atlas DRC (v3) || Boston Dynamics || URDF || [https://github.com/RobotLocomotion/models/blob/master/atlas/atlas_convex_hull.urdf URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 10 || Gundam RX-78 || Bandai Namco Filmworks || URDF || [https://github.com/gundam-global-challenge/gundam_robot/tree/master/gundam_rx78_description URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| 11 || Romeo || Aldebaran Robotics || URDF || [https://github.com/ros-aldebaran/romeo_robot/tree/master/romeo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 12 || SigmaBan || Rhoban || URDF || [https://github.com/Rhoban/sigmaban_urdf URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 13 || Robonaut 2 || NASA JSC Robotics || URDF || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/r2_description URDF] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 14 || TALOS || PAL Robotics || URDF, Xacro || [https://github.com/stack-of-tasks/talos-data URDF], [https://github.com/pal-robotics/talos_robot/tree/kinetic-devel/talos_description Xacro] || LGPL-3.0, Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 15 || WALK-MAN || IIT || Xacro || [https://github.com/ADVRHumanoids/iit-walkman-ros-pkg/tree/master/walkman_urdf Xacro] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 16 || Draco3 || Apptronik || URDF || [https://github.com/shbang91/draco3_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 17 || ergoCub || IIT || URDF || [https://github.com/icub-tech-iit/ergocub-software/tree/master/urdf/ergoCub URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 18 || Baxter || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/baxter.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 19 || Pepper || SoftBank Robotics || URDF || [https://github.com/ros-naoqi/pepper_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 20 || Mini-Cheetah || MIT || URDF || [https://github.com/MIT-Mini-Cheetah/mini-cheetah URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 21 || Thor-Mang || ROBOTIS || URDF || [https://github.com/ROBOTIS-GIT/ROBOTIS-MANIPULATION-THORMANG URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 22 || Cassie || Agility Robotics || URDF || [https://github.com/agilityrobotics/cassie_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 23 || Sophia || Hanson Robotics || URDF || [https://github.com/hansonrobotics/sophia_robot URDF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 24 || Asimo || Honda || URDF || [https://github.com/honda/asimo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 25 || HRP-5P || Kawada Robotics || URDF || [https://github.com/kawada-robotics/hrp5p URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 26 || Valkyrie R5 || NASA || URDF, Xacro || [https://github.com/nasa/valkyrie_simulation URDF], [https://github.com/nasa/valkyrie_robot Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 27 || REEM-C || PAL Robotics || URDF || [https://github.com/pal-robotics/reemc_description URDF] || LGPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 28 || Darwin-OP || ROBOTIS || URDF || [https://github.com/ROBOTIS-GIT/Darwin_OP_ROS URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 29 || Poppy || Inria Flowers || URDF || [https://github.com/poppy-project/poppy_humanoid URDF] || GPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 30 || Kengoro || JSK || URDF || [https://github.com/jsk-ros-pkg/jsk_models/tree/master/kengoro_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 31 || SURALP || Istanbul Technical University || URDF || [https://github.com/suralp/suralp URDF] || GPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 32 || Kengoro || JSK || URDF || [https://github.com/jsk-ros-pkg/jsk_models/tree/master/kengoro_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 33 || ANYmal || ANYbotics || URDF || [https://github.com/leggedrobotics/anymal_b_simple_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 34 || MIR-Lola || Munich Institute of Robotics and Machine Intelligence || URDF || [https://github.com/mir-lab/lola_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 35 || HSR || Toyota || URDF || [https://github.com/toyota-research-institute/hsr_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 36 || Pepper 2 || SoftBank Robotics || URDF || [https://github.com/ros-naoqi/pepper_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 37 || BHR-4 || Beijing Institute of Technology || URDF || [https://github.com/bit-bots/bhr4_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 38 || Tiago || PAL Robotics || URDF || [https://github.com/pal-robotics/tiago_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 39 || InMoov || Gael Langevin || URDF || [https://github.com/InMoov/inmoov_ros URDF] || GPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 40 || BipedalWalker || OpenAI Gym || URDF || [https://github.com/openai/gym/tree/master/gym/envs/robotics/assets/bipedal_walker URDF] || MIT || ✔️ || ✔️ || ✔️
|-
|}
=== End Effectors ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Allegro Hand || Wonik Robotics || URDF, MJCF || [https://github.com/RobotLocomotion/models/tree/master/allegro_hand_description/urdf URDF], [https://github.com/google-deepmind/mujoco_menagerie/tree/main/wonik_allegro MJCF] || BSD || ✔️ || ✔️ || ✔️
|-
| Shadow Hand E3M5 || The Shadow Robot Company || MJCF || [https://github.com/deepmind/mujoco_menagerie/tree/main/shadow_hand MJCF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Robotiq 2F-85 || Robotiq || MJCF, URDF, Xacro || [https://github.com/deepmind/mujoco_menagerie/tree/main/robotiq_2f85 MJCF], [https://github.com/a-price/robotiq_arg85_description URDF], [https://github.com/ros-industrial/robotiq/tree/kinetic-devel/robotiq_2f_85_gripper_visualization Xacro] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| BarrettHand || Barrett Technology || URDF || [https://github.com/jhu-lcsr-attic/bhand_model/tree/master/robots URDF] || BSD || ✖️ || ✔️ || ✔️
|-
| WSG 50 || SCHUNK || SDF || [https://github.com/RobotLocomotion/models/tree/master/wsg_50_description SDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Baxter Left End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/left_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Baxter Right End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/right_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|}
=== Educational ===
{| class="wikitable"
|-
! Name !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Double Pendulum || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/double_pendulum_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Simple Humanoid || URDF || [https://github.com/laas/simple_humanoid_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✖️
|-
| TriFingerEdu || URDF || [https://github.com/facebookresearch/differentiable-robot-model/tree/main/diff_robot_data/trifinger_edu_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| FingerEdu || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/finger_edu_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
== References ==
* GitHub and web searches
* https://github.com/robot-descriptions/awesome-robot-descriptions
* https://github.com/robot-descriptions/robot_descriptions.py
* https://github.com/robotology
== Citation ==
<pre>
@misc{humanoids-2024,
title={Robot Descriptions List},
author={K-Scale Humanoids Wiki Contributors},
year={2024},
url={https://humanoids.wiki/w/Robot_Descriptions_List}
}
</pre>
778cae7cbe49415b938a21116b7d71a8baae90f2
1452
1451
2024-06-05T19:53:07Z
Vrtnis
21
/*Add BOLT*/
wikitext
text/x-wiki
=== Humanoids ===
{| class="wikitable sortable"
|-
! Sr No !! Name !! Maker !! Formats !! URL !! License !! Meshes !! Inertias !! Collisions
|-
| 1 || Stompy || K-Scale Labs || URDF || [https://stompy.kscale.dev URDF], [https://stompy.kscale.dev MJCF] || MIT || ✔️ || ✔️ || ✔️
|-
| 2 || Digit || Agility Robotics || URDF || [https://github.com/adubredu/DigitRobot.jl URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| 3 || H1 || UNITREE Robotics || MJCF || [https://github.com/google-deepmind/mujoco_menagerie/tree/main/unitree_h1 MJCF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 4 || Atlas v4 || Boston Dynamics || URDF || [https://github.com/openai/roboschool/tree/1.0.49/roboschool/models_robot/atlas_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 5 || Valkyrie || NASA JSC Robotics || URDF, Xacro || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/val_description/model URDF], [https://gitlab.com/nasa-jsc-robotics/val_description Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 6 || JVRC-1 || AIST || MJCF, URDF || [https://github.com/isri-aist/jvrc_mj_description/ MJCF], [https://github.com/stephane-caron/jvrc_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 7 || iCub || IIT || URDF || [https://github.com/robotology/icub-models/tree/master/iCub URDF] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| 8 || JAXON || JSK || COLLADA, URDF, VRML || [https://github.com/stephane-caron/openrave_models/tree/master/JAXON COLLADA], [https://github.com/robot-descriptions/jaxon_description URDF], [https://github.com/start-jsk/rtmros_choreonoid/tree/master/jvrc_models/JAXON_JVRC VRML] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| 9 || Atlas DRC (v3) || Boston Dynamics || URDF || [https://github.com/RobotLocomotion/models/blob/master/atlas/atlas_convex_hull.urdf URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 10 || Gundam RX-78 || Bandai Namco Filmworks || URDF || [https://github.com/gundam-global-challenge/gundam_robot/tree/master/gundam_rx78_description URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| 11 || Romeo || Aldebaran Robotics || URDF || [https://github.com/ros-aldebaran/romeo_robot/tree/master/romeo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 12 || SigmaBan || Rhoban || URDF || [https://github.com/Rhoban/sigmaban_urdf URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 13 || Robonaut 2 || NASA JSC Robotics || URDF || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/r2_description URDF] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 14 || TALOS || PAL Robotics || URDF, Xacro || [https://github.com/stack-of-tasks/talos-data URDF], [https://github.com/pal-robotics/talos_robot/tree/kinetic-devel/talos_description Xacro] || LGPL-3.0, Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 15 || WALK-MAN || IIT || Xacro || [https://github.com/ADVRHumanoids/iit-walkman-ros-pkg/tree/master/walkman_urdf Xacro] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 16 || Draco3 || Apptronik || URDF || [https://github.com/shbang91/draco3_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 17 || ergoCub || IIT || URDF || [https://github.com/icub-tech-iit/ergocub-software/tree/master/urdf/ergoCub URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 18 || Baxter || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/baxter.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 19 || Pepper || SoftBank Robotics || URDF || [https://github.com/ros-naoqi/pepper_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 20 || Mini-Cheetah || MIT || URDF || [https://github.com/MIT-Mini-Cheetah/mini-cheetah URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 21 || Thor-Mang || ROBOTIS || URDF || [https://github.com/ROBOTIS-GIT/ROBOTIS-MANIPULATION-THORMANG URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 22 || Cassie || Agility Robotics || URDF || [https://github.com/agilityrobotics/cassie_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 23 || Sophia || Hanson Robotics || URDF || [https://github.com/hansonrobotics/sophia_robot URDF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 24 || Asimo || Honda || URDF || [https://github.com/honda/asimo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 25 || HRP-5P || Kawada Robotics || URDF || [https://github.com/kawada-robotics/hrp5p URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 26 || Valkyrie R5 || NASA || URDF, Xacro || [https://github.com/nasa/valkyrie_simulation URDF], [https://github.com/nasa/valkyrie_robot Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 27 || REEM-C || PAL Robotics || URDF || [https://github.com/pal-robotics/reemc_description URDF] || LGPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 28 || Darwin-OP || ROBOTIS || URDF || [https://github.com/ROBOTIS-GIT/Darwin_OP_ROS URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 29 || Poppy || Inria Flowers || URDF || [https://github.com/poppy-project/poppy_humanoid URDF] || GPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 30 || Kengoro || JSK || URDF || [https://github.com/jsk-ros-pkg/jsk_models/tree/master/kengoro_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 31 || SURALP || Istanbul Technical University || URDF || [https://github.com/suralp/suralp URDF] || GPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 32 || Kengoro || JSK || URDF || [https://github.com/jsk-ros-pkg/jsk_models/tree/master/kengoro_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 33 || ANYmal || ANYbotics || URDF || [https://github.com/leggedrobotics/anymal_b_simple_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 34 || MIR-Lola || Munich Institute of Robotics and Machine Intelligence || URDF || [https://github.com/mir-lab/lola_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 35 || HSR || Toyota || URDF || [https://github.com/toyota-research-institute/hsr_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 36 || Pepper 2 || SoftBank Robotics || URDF || [https://github.com/ros-naoqi/pepper_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 37 || BHR-4 || Beijing Institute of Technology || URDF || [https://github.com/bit-bots/bhr4_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 38 || Tiago || PAL Robotics || URDF || [https://github.com/pal-robotics/tiago_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 39 || InMoov || Gael Langevin || URDF || [https://github.com/InMoov/inmoov_ros URDF] || GPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 40 || BipedalWalker || OpenAI Gym || URDF || [https://github.com/openai/gym/tree/master/gym/envs/robotics/assets/bipedal_walker URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 41 || BOLT || Istituto Italiano di Tecnologia || URDF || [https://github.com/robotology/icub-main/tree/master/app/robots/bolt URDF] || GPL-2.0 || ✔️ || ✔️ || ✔️
|}
=== End Effectors ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Allegro Hand || Wonik Robotics || URDF, MJCF || [https://github.com/RobotLocomotion/models/tree/master/allegro_hand_description/urdf URDF], [https://github.com/google-deepmind/mujoco_menagerie/tree/main/wonik_allegro MJCF] || BSD || ✔️ || ✔️ || ✔️
|-
| Shadow Hand E3M5 || The Shadow Robot Company || MJCF || [https://github.com/deepmind/mujoco_menagerie/tree/main/shadow_hand MJCF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Robotiq 2F-85 || Robotiq || MJCF, URDF, Xacro || [https://github.com/deepmind/mujoco_menagerie/tree/main/robotiq_2f85 MJCF], [https://github.com/a-price/robotiq_arg85_description URDF], [https://github.com/ros-industrial/robotiq/tree/kinetic-devel/robotiq_2f_85_gripper_visualization Xacro] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| BarrettHand || Barrett Technology || URDF || [https://github.com/jhu-lcsr-attic/bhand_model/tree/master/robots URDF] || BSD || ✖️ || ✔️ || ✔️
|-
| WSG 50 || SCHUNK || SDF || [https://github.com/RobotLocomotion/models/tree/master/wsg_50_description SDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Baxter Left End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/left_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Baxter Right End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/right_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|}
=== Educational ===
{| class="wikitable"
|-
! Name !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Double Pendulum || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/double_pendulum_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Simple Humanoid || URDF || [https://github.com/laas/simple_humanoid_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✖️
|-
| TriFingerEdu || URDF || [https://github.com/facebookresearch/differentiable-robot-model/tree/main/diff_robot_data/trifinger_edu_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| FingerEdu || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/finger_edu_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
== References ==
* GitHub and web searches
* https://github.com/robot-descriptions/awesome-robot-descriptions
* https://github.com/robot-descriptions/robot_descriptions.py
* https://github.com/robotology
== Citation ==
<pre>
@misc{humanoids-2024,
title={Robot Descriptions List},
author={K-Scale Humanoids Wiki Contributors},
year={2024},
url={https://humanoids.wiki/w/Robot_Descriptions_List}
}
</pre>
446c7eb347cf6d1486ce9b69c1714116a2d50371
File:Stompy gripper 8.jpg
6
314
1427
2024-06-03T21:47:58Z
Dymaxion
22
wikitext
text/x-wiki
Two Stompy grippers screwed to another part.
01f93ee70c12597b5f354b4ffc1d69448883c442
Nvidia Jetson: Flashing Custom Firmware
0
315
1431
2024-06-04T00:33:26Z
Vedant
24
Created page with "= Developing Custom Firmware = == For the Jetson Orin Nano == = Flashing = = Notes: - Current Design constraints: - Based off of the availability of parts in JLCPCB...."
wikitext
text/x-wiki
= Developing Custom Firmware =
== For the Jetson Orin Nano ==
= Flashing =
=
Notes:
- Current Design constraints:
- Based off of the availability of parts in JLCPCB. Possibility of parts not being found or existing.
-
- Flashing is done with the flash.sh script through the following command
$ sudo ./flash.sh <board> <rootdev>
where board is the actual board (Jetson-Nano-XX, etc.)
rootdev determines what type of device is being flahed. Use mmcblk0pc1 to flash a local storage device (eMMC or SD card)
- TO begin flashing, put the device into force recovery mode and then press reset.
- Run the flash script using the previous command specified.
Flash using a convenient script:
- To avoid having to specify the rootdev andthe board configurations, can use the custom flashing script:
-
Using GPIO Pins to program protocol:
- you can use the rasberry pi libraries to interface with the pins, configuring them to whatever layout that is needed.
- Example: it is possible to direclty interface with the i2c system in the nano by using the
e77d9c0f90d864de947fc8cd7b5336f43a483839
1432
1431
2024-06-04T00:34:33Z
108.211.178.220
0
wikitext
text/x-wiki
= Developing Custom Firmware =
== For the Jetson Orin Nano ==
= Flashing =
=
Notes:
- Current Design constraints:
- Based off of the availability of parts in JLCPCB. Possibility of parts not being found or existing.
-
- Flashing is done with the flash.sh script through the following command
$ sudo ./flash.sh <board> <rootdev>
where board is the actual board (Jetson-Nano-XX, etc.)
rootdev determines what type of device is being flahed. Use mmcblk0pc1 to flash a local storage device (eMMC or SD card)
- TO begin flashing, put the device into force recovery mode and then press reset.
- Run the flash script using the previous command specified.
Flash using a convenient script:
- To avoid having to specify the rootdev andthe board configurations, can use the custom flashing script:
-
Using GPIO Pins to program protocol:
- you can use the rasberry pi libraries to interface with the pins, configuring them to whatever layout that is needed.
- Example: it is possible to direclty interface with the i2c system in the nano by using the linux terminal itself.
7ab0016f09aa045bc7240b59fd1d7ffd9cceaa01
1433
1432
2024-06-04T00:35:52Z
108.211.178.220
0
wikitext
text/x-wiki
= Developing Custom Firmware =
== For the Jetson Orin Nano ==
= Flashing =
Notes:
- Current Design constraints:
- Based off of the availability of parts in JLCPCB. Possibility of parts not being found or existing.
-
- Flashing is done with the flash.sh script through the following command
$ sudo ./flash.sh <board> <rootdev>
where board is the actual board (Jetson-Nano-XX, etc.)
rootdev determines what type of device is being flahed. Use mmcblk0pc1 to flash a local storage device (eMMC or SD card)
- TO begin flashing, put the device into force recovery mode and then press reset.
- Run the flash script using the previous command specified.
Flash using a convenient script:
- To avoid having to specify the rootdev andthe board configurations, can use the custom flashing script:
-
Using GPIO Pins to program protocol:
- you can use the rasberry pi libraries to interface with the pins, configuring them to whatever layout that is needed.
- Example: it is possible to direclty interface with the i2c system in the nano by using the linux terminal itself.
43957c0152a5c1707928287f61556ae75b891f46
1434
1433
2024-06-04T00:36:07Z
108.211.178.220
0
/* For the Jetson Orin Nano */
wikitext
text/x-wiki
= Developing Custom Firmware =
== For the Jetson Orin Nano ==
= Flashing =
Notes:
- Current Design constraints:
- Based off of the availability of parts in JLCPCB. Possibility of parts not being found or existing.
-
- Flashing is done with the flash.sh script through the following command
$ sudo ./flash.sh <board> <rootdev>
where board is the actual board (Jetson-Nano-XX, etc.)
rootdev determines what type of device is being flahed. Use mmcblk0pc1 to flash a local storage device (eMMC or SD card)
- TO begin flashing, put the device into force recovery mode and then press reset.
- Run the flash script using the previous command specified.
Flash using a convenient script:
- To avoid having to specify the rootdev andthe board configurations, can use the custom flashing script:
-
Using GPIO Pins to program protocol:
- you can use the rasberry pi libraries to interface with the pins, configuring them to whatever layout that is needed.
- Example: it is possible to direclty interface with the i2c system in the nano by using the linux terminal itself.
7c3e0d5c465aa1b20aba6d77c3b39a981a7034d5
1435
1434
2024-06-04T00:38:01Z
108.211.178.220
0
/* Flashing */
wikitext
text/x-wiki
= Developing Custom Firmware =
== For the Jetson Orin Nano ==
= Flashing =
Notes:
- Current Design constraints:
- Based off of the availability of parts in JLCPCB. Possibility of parts not being found or existing.
-
- Flashing is done with the flash.sh script through the following command
$ sudo ./flash.sh <board> <rootdev>
where board is the actual board (Jetson-Nano-XX, etc.)
rootdev determines what type of device is being flahed. Use mmcblk0pc1 to flash a local storage device (eMMC or SD card)
- TO begin flashing, put the device into force recovery mode and then press reset.
- Run the flash script using the previous command specified.
Flash using a convenient script:
- To avoid having to specify the rootdev and the board configurations, can use the custom flashing script:
-
Using GPIO Pins to program protocol:
- you can use the rasberry pi libraries to interface with the pins, configuring them to whatever layout that is needed.
- Example: it is possible to direclty interface with the i2c system in the nano by using the linux terminal itself.
Current Game Plan:
- mess around with the programming of the GPIO pins: Figure out if there are ways to choose access teh data that the GPIO pins are Or Receiving.
Test if it is possible to reconfigure the pins on the jetson on the firmware side
db5cd86b096e03b7fd3f16425217135076e291d9
1438
1435
2024-06-04T09:47:13Z
108.211.178.220
0
wikitext
text/x-wiki
= Developing Custom Firmware =
== For the Jetson Orin Nano ==
=== Flashing ===
Notes:
- Current design constraints:
- Based on the availability of parts in the JLCPCB parts library; there is a possibility that parts cannot be found or do not exist.
-
- Flashing is done with the flash.sh script through the following command
$ sudo ./flash.sh <board> <rootdev>
where <board> is the board configuration name (Jetson-Nano-XX, etc.)
and <rootdev> determines what type of device is being flashed. Use mmcblk0p1 to flash a local storage device (eMMC or SD card).
- To begin flashing, put the device into force recovery mode and then press reset.
- Run the flash script using the command above.
Flash using a convenient script:
- To avoid having to specify the rootdev and the board configuration, a custom flashing script can be used:
-
Using GPIO pins to program protocols:
- You can use the Raspberry Pi-compatible GPIO libraries to interface with the pins, configuring them to whatever layout is needed.
- Example: it is possible to directly interface with the I2C bus on the Nano from the Linux terminal itself.
Current Game Plan:
- Experiment with programming the GPIO pins: figure out whether there are ways to access the data that the GPIO pins are sending or receiving.
Test whether it is possible to reconfigure the pins on the Jetson on the firmware side.
Build Time:
- On a single NVIDIA Nano, it takes about 45 minutes to 1 hour to complete the build. The build is encrypted with RSA. The source is still accessible, and every time changes are made to the source files that need to be reflected, the build files have to be remade. Current goal: figure out whether specific build files can be rebuilt selectively to decrease development time.
Notes:
- May need to install libssl-dev (depending on whether certain packages are already included when running the script).
32c769652bba1b1457e0df3c11d74abeab858209
1442
1438
2024-06-04T18:23:46Z
Goblinrum
25
wikitext
text/x-wiki
= Developing Custom Firmware =
== For the Jetson Orin Nano ==
=== Flashing ===
Notes:
- Current design constraints:
- Based on the availability of parts in the JLCPCB parts library; there is a possibility that parts cannot be found or do not exist.
-
- Flashing is done with the flash.sh script through the following command
$ sudo ./flash.sh <board> <rootdev>
where <board> is the board configuration name (Jetson-Nano-XX, etc.)
and <rootdev> determines what type of device is being flashed. Use mmcblk0p1 to flash a local storage device (eMMC or SD card).
- To begin flashing, put the device into force recovery mode and then press reset.
- Run the flash script using the command above (see the example below).
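For example, a typical invocation for an Orin Nano developer kit looks roughly like the following; the board configuration name is an assumption and should be matched against the <code>*.conf</code> files shipped in your <code>Linux_for_Tegra</code> directory:
<syntaxhighlight lang="bash">
# Put the device into force recovery mode first (hold REC, tap RESET),
# then confirm it enumerates over USB before flashing.
lsusb | grep -i nvidia

# Flash local storage (eMMC or SD card). "jetson-orin-nano-devkit" is an
# assumed board config name; check the *.conf files in Linux_for_Tegra.
cd Linux_for_Tegra
sudo ./flash.sh jetson-orin-nano-devkit mmcblk0p1
</syntaxhighlight>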
Flash using a convenient script:
- To avoid having to specify the rootdev and the board configuration, a custom flashing script can be used:
-
Using GPIO pins to program protocols:
- You can use the Raspberry Pi-compatible GPIO libraries to interface with the pins, configuring them to whatever layout is needed.
- Example: it is possible to directly interface with the I2C bus on the Nano from the Linux terminal itself (see the sketch below).
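As a rough sketch of what "from the terminal" means here (package names are standard Ubuntu ones; the bus number is only an example, not a pin mapping for this board):
<syntaxhighlight lang="bash">
# Userspace tools for poking I2C buses and GPIO lines (Ubuntu package names).
sudo apt-get install -y i2c-tools gpiod

# List the I2C buses the kernel exposes, then scan one of them for devices.
# Bus 7 is only an example; pick a bus from the i2cdetect -l output.
i2cdetect -l
sudo i2cdetect -y -r 7

# Enumerate GPIO controllers and their line names/consumers.
gpiodetect
gpioinfo
</syntaxhighlight>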
Current Game Plan:
- Experiment with programming the GPIO pins: figure out whether there are ways to access the data that the GPIO pins are sending or receiving.
Test whether it is possible to reconfigure the pins on the Jetson on the firmware side.
Build Time:
- On a single NVIDIA Nano, it takes about 45 minutes to 1 hour to complete the build. The build is encrypted with RSA. The source is still accessible, and every time changes are made to the source files that need to be reflected, the build files have to be remade. Current goal: figure out whether specific build files can be rebuilt selectively to decrease development time.
Notes:
- General requirements to build the Linux kernel still apply: e.g. `build-essential`, `bc`, `libssl-dev`, etc.
- May need to install `libssl-dev` (depending on whether certain packages are already included when running the script).
66661a6a82fb7bfd82722736aaa29b96f343e2c8
K-Scale Operating System
0
316
1436
2024-06-04T07:42:40Z
Ben
2
Created page with "=== Links === * [https://developer.nvidia.com/embedded/jetson-linux NVIDIA Jetson Linux Driver Package]"
wikitext
text/x-wiki
=== Links ===
* [https://developer.nvidia.com/embedded/jetson-linux NVIDIA Jetson Linux Driver Package]
7d5230b8f59b0bb86246cbba31615219ec96c270
1437
1436
2024-06-04T07:43:07Z
Ben
2
wikitext
text/x-wiki
=== Links ===
* [https://developer.nvidia.com/embedded/jetson-linux NVIDIA Jetson Linux Driver Package]
* [https://docs.nvidia.com/jetson/archives/r35.4.1/DeveloperGuide/text/SD/Kernel/KernelCustomization.html Kernel customization]
e2006a6f5a6f42c314e3bd2b69e1ef2943faafcd
1443
1437
2024-06-04T18:54:53Z
Goblinrum
25
wikitext
text/x-wiki
=== Notes ===
You need to install the following if you are not compiling and building on the Nano itself (these are the standard prerequisites for building the Linux kernel yourself):
 sudo apt-get install git fakeroot build-essential ncurses-dev xz-utils libssl-dev bc flex libelf-dev bison
Also, install the cross-compilation toolchain from here: https://docs.nvidia.com/jetson/archives/r35.4.1/DeveloperGuide/text/AT/JetsonLinuxToolchain.html#at-jetsonlinuxtoolchain (a typical environment setup is sketched below).
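A minimal sketch of the environment setup for cross-compiling, assuming the toolchain from the link above was unpacked to <code>$HOME/l4t-toolchain</code> (the path and tool prefix are assumptions; adjust to your download):
<syntaxhighlight lang="bash">
# Point the kernel build at the AArch64 cross-toolchain downloaded above.
# Both the install path and the aarch64-buildroot-linux-gnu- prefix are
# assumptions; use whatever your unpacked toolchain actually contains.
export CROSS_COMPILE=$HOME/l4t-toolchain/bin/aarch64-buildroot-linux-gnu-
export ARCH=arm64

# Sanity check: the cross-compiler should be found and report its version.
"${CROSS_COMPILE}gcc" --version
</syntaxhighlight>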
=== Links ===
* [https://developer.nvidia.com/embedded/jetson-linux NVIDIA Jetson Linux Driver Package]
* [https://docs.nvidia.com/jetson/archives/r35.4.1/DeveloperGuide/text/SD/Kernel/KernelCustomization.html Kernel customization]
02a25909d14c2319c038ddd32e5b866b10d72db9
Jetson Orin Notes
0
218
1439
982
2024-06-04T16:00:54Z
Tom
23
wikitext
text/x-wiki
Notes on programming/interfacing with Jetson Orin hardware.
=== Upgrading AGX to Jetson Linux 36.3 ===
==== BSP approach (avoids SDK Manager) ====
* Requires Ubuntu 22.04; it is very unhappy on Gentoo.
* Requires an Intel/AMD 64-bit CPU.
* Download "Driver Package (BSP)" from [https://developer.nvidia.com/embedded/jetson-linux here]
* Unpack (as root, get used to doing most of this as root), preserving privileges
** <code>tar xjpf ...</code>
* Download "Sample Root Filesystem"
* Unpack (as root..) into rootfs directory inside of the BSP archive above.
* Run <code>sudo ./tools/l4t_flash_prerequisites.sh</code>
* Run <code>./apply_binaries.sh</code> from the BSP
** Note: If apply_binaries (or frankly, anything, this is brittle) fails, remove and recreate rootfs - the OS might be left in an unbootable state.
* Reboot AGX into "Recovery Mode" - hold the recovery button and reset button, release simultaneously ((sic) reset first?)
* Connect USB-C cable to the debug port ("front" USB-c)
* The NVIDIA AGX device should appear in the <code>lsusb</code> output as NVIDIA Corp. APX
* Run <code>./flash.sh</code>. Different options exist for different use cases (https://docs.nvidia.com/jetson/archives/r36.3/DeveloperGuide/IN/QuickStart.html#in-quickstart)
Jetson AGX Orin Developer Kit (eMMC):
$ sudo ./flash.sh jetson-agx-orin-devkit internal
* Watch for a few minutes (if it is going to crash, it typically does so early), then go for lunch.
[[Category: Firmware]]
9786dd856928865a0d28510f06ea0918e1e8d9da
1440
1439
2024-06-04T16:01:48Z
Tom
23
/* BSP approach (avoids SDK Manager) */
wikitext
text/x-wiki
Notes on programming/interfacing with Jetson Orin hardware.
=== Upgrading AGX to Jetson Linux 36.3 ===
==== BSP approach (avoids SDK Manager) ====
* Requires Ubuntu 22.04; it is very unhappy on Gentoo.
* Requires an Intel/AMD 64-bit CPU.
* Download "Driver Package (BSP)" from [https://developer.nvidia.com/embedded/jetson-linux here]
* Unpack (as root, get used to doing most of this as root), preserving privileges
** <code>tar xjpf ...</code>
* Download "Sample Root Filesystem"
* Unpack (as root..) into rootfs directory inside of the BSP archive above.
* Run <code>sudo ./tools/l4t_flash_prerequisites.sh</code>
* Run <code>./apply_binaries.sh</code> from the BSP
** Note: If apply_binaries (or frankly, anything, this is brittle) fails, remove and recreate rootfs - the OS might be left in an unbootable state.
* Reboot AGX into "Recovery Mode" - hold the recovery button and reset button, release simultaneously ((sic) reset first?)
* Connect USB-C cable to the debug port ("front" USB-c)
* The NVIDIA AGX device should appear in the <code>lsusb</code> output as NVIDIA Corp. APX
* Run <code>./flash.sh</code>. Different options exist for different use cases (https://docs.nvidia.com/jetson/archives/r36.3/DeveloperGuide/IN/QuickStart.html#in-quickstart)
Jetson AGX Orin Developer Kit (eMMC):
$ <code>sudo ./flash.sh jetson-agx-orin-devkit internal</code>
* Watch for a few minutes (if it is going to crash, it typically does so early), then go for lunch.
[[Category: Firmware]]
e6421d3859487babd61235a1ada3dac850a6b28e
1441
1440
2024-06-04T16:02:01Z
Tom
23
/* BSP approach (avoids SDK Manager) */
wikitext
text/x-wiki
Notes on programming/interfacing with Jetson Orin hardware.
=== Upgrading AGX to Jetson Linux 36.3 ===
==== BSP approach (avoids SDK Manager) ====
* Requires Ubuntu 22.04; it is very unhappy on Gentoo.
* Requires an Intel/AMD 64-bit CPU.
* Download "Driver Package (BSP)" from [https://developer.nvidia.com/embedded/jetson-linux here]
* Unpack (as root, get used to doing most of this as root), preserving privileges
** <code>tar xjpf ...</code>
* Download "Sample Root Filesystem"
* Unpack (as root..) into rootfs directory inside of the BSP archive above.
* Run <code>sudo ./tools/l4t_flash_prerequisites.sh</code>
* Run <code>./apply_binaries.sh</code> from the BSP
** Note: If apply_binaries (or frankly, anything, this is brittle) fails, remove and recreate rootfs - the OS might be left in an unbootable state.
* Reboot AGX into "Recovery Mode" - hold the recovery button and reset button, release simultaneously ((sic) reset first?)
* Connect USB-C cable to the debug port ("front" USB-c)
* The NVIDIA AGX device should appear in the <code>lsusb</code> output as NVIDIA Corp. APX
* Run <code>./flash.sh</code>. Different options exist for different use cases (https://docs.nvidia.com/jetson/archives/r36.3/DeveloperGuide/IN/QuickStart.html#in-quickstart)
Jetson AGX Orin Developer Kit (eMMC):
<code>$ sudo ./flash.sh jetson-agx-orin-devkit internal</code>
* Watch for a few minutes (if it is going to crash, it typically does so early), then go for lunch. The whole procedure is condensed in the sketch below.
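The list above condenses to roughly the following shell session; the archive file names are assumptions for the 36.3 release, so substitute the exact files you downloaded:
<syntaxhighlight lang="bash">
# Unpack the BSP and the sample rootfs as root, preserving permissions.
# Archive names below are assumed for r36.3; use your actual downloads.
sudo tar xjpf Jetson_Linux_R36.3.0_aarch64.tbz2
sudo tar xjpf Tegra_Linux_Sample-Root-Filesystem_R36.3.0_aarch64.tbz2 -C Linux_for_Tegra/rootfs/

cd Linux_for_Tegra
sudo ./tools/l4t_flash_prerequisites.sh
sudo ./apply_binaries.sh

# With the AGX in recovery mode and cabled to the front USB-C port:
lsusb | grep -i "NVIDIA Corp"
sudo ./flash.sh jetson-agx-orin-devkit internal
</syntaxhighlight>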
[[Category: Firmware]]
05bfbb05718edf35c0cb2a572ce427836def3078
Pollen Robotics
0
317
1454
2024-06-06T00:29:57Z
Admin
1
Created page with "[https://mirsee.com/ Pollen Robotics] is a robotics company based in Bordeaux, France. {{infobox company | name = Pollen Robotics | country = France | website_link = https:/..."
wikitext
text/x-wiki
[https://www.pollen-robotics.com/ Pollen Robotics] is a robotics company based in Bordeaux, France.
{{infobox company
| name = Pollen Robotics
| country = France
| website_link = https://www.pollen-robotics.com/
| robots = [[Reachy]]
}}
[[Category:Companies]]
44c4b394b648451fc812b3b96ce4325ee84e9c97
1455
1454
2024-06-06T00:30:04Z
Admin
1
wikitext
text/x-wiki
[https://www.pollen-robotics.com/ Pollen Robotics] is a robotics company based in Bordeaux, France.
{{infobox company
| name = Pollen Robotics
| country = France
| website_link = https://www.pollen-robotics.com/
| robots = [[Reachy]]
}}
[[Category:Companies]]
70f45176cc6c80de5b95aea706e85af85078f62a
Mirsee Robotics
0
194
1456
811
2024-06-06T00:30:09Z
Admin
1
wikitext
text/x-wiki
[https://mirsee.com/ Mirsee Robotics] is a robotics company based in Cambridge, ON, Canada. Along with custom actuators, hands, and other solutions, they have developed two humanoid robots, [[Beomni]] and [[Mirsee]], both of which have wheeled bases.
{{infobox company
| name = Mirsee Robotics
| country = Canada
| website_link = https://mirsee.com/
| robots = [[Beomni]], [[Mirsee]]
}}
[[Category:Companies]]
8c1dc59aeabda52c05ae904ee9554b4e2695861c
Reachy
0
318
1457
2024-06-06T00:32:35Z
Admin
1
Created page with "Beomni is a humanoid robot developed by [[Mirsee Robotics]] for the Beyond Imagination AI company: https://www.beomni.ai/. {{infobox robot | name = Reachy | organization = ..."
wikitext
text/x-wiki
Beomni is a humanoid robot developed by [[Mirsee Robotics]] for the Beyond Imagination AI company: https://www.beomni.ai/.
{{infobox robot
| name = Reachy
| organization = [[Pollen Robotics]]
| video_link = https://www.youtube.com/watch?v=oZxHkp4-DnM
}}
[[Category:Robots]]
972d8feb8b8445b96e107215be1efbe362a76a84
Reachy
0
318
1458
1457
2024-06-06T00:32:46Z
Admin
1
wikitext
text/x-wiki
Reachy is a humanoid robot developed by [[Pollen Robotics]].
{{infobox robot
| name = Reachy
| organization = [[Pollen Robotics]]
| video_link = https://www.youtube.com/watch?v=oZxHkp4-DnM
}}
[[Category:Robots]]
c00be9dac547129ddbabf2dc77ee79d92dbfdd7b
Stompy
0
2
1459
1401
2024-06-06T00:34:59Z
Admin
1
wikitext
text/x-wiki
[[File:Stompy.jpg|right|300px|thumb|Stompy standing up]]
{{infobox robot
| name = Stompy
| organization = [[K-Scale Labs]]
| cost = USD 10,000
}}
Stompy is an open-source humanoid robot developed by [[K-Scale Labs]]. Here are some relevant links:
* [[Stompy To-Do List]]
* [[Stompy Build Guide]]
* [[Gripper History]]
= Hardware =
This page is dedicated to detailing the hardware selections for humanoid robots, covering components such as actuators, cameras, compute units, PCBs and modules, batteries, displays, microphones, speakers, and wiring and connectors.
== Actuators ==
Actuators are the components that allow the robot to move and interact with its environment. They convert energy into mechanical motion. Common types used in humanoid robots include:
* Servo motors
* Stepper motors
* Linear actuators
== Cameras ==
Cameras are essential for visual processing, allowing the robot to perceive its surroundings. Important considerations include:
* Resolution and frame rate
* Field of view
* Depth sensing capabilities (3D cameras)
== Compute ==
The compute section handles the processing requirements of the robot. This includes:
* Microprocessors and microcontrollers
* Single-board computers like Raspberry Pi or Nvidia Jetson
* Dedicated AI accelerators for machine learning tasks
== PCB and Modules ==
Printed Circuit Boards (PCBs) and the modules on them are the backbone of the robot's electronic system.
* Main control board
* Power management modules
* Sensor interfaces
* Communication modules (Wi-Fi, Bluetooth)
== Batteries ==
Batteries provide the necessary power to all robotic systems and are crucial for mobile autonomy. Selection factors include:
* Battery type (Li-Ion, NiMH, Lead-Acid)
* Capacity (measured in mAh or Ah)
* Voltage and energy density
* Safety features and durability
== Displays ==
Displays are used in robots to present information such as system status, data, and interactive elements. Key features include:
* Size variations ranging from small to large panels
* Touchscreen capabilities
* High resolution displays
== Microphones ==
Microphones enable the robot to receive and process audio inputs, crucial for voice commands and auditory data. Factors to consider are:
* Sensitivity and noise cancellation
* Directionality (omnidirectional vs. unidirectional)
* Integration with voice recognition software
== Speakers ==
Speakers allow the robot to communicate audibly with its environment, essential for interaction and alerts. Considerations include:
* Power output and sound quality
* Size and mounting options
* Compatibility with audio processing hardware
== Wiring and Connectors ==
Proper wiring and connectors ensure reliable communication and power supply throughout the robot's components.
* Types of wires (gauge, shielding)
* Connectors (pin types, waterproofing)
* Cable management solutions
=== Conventions ===
The images below show our pin convention for the CAN bus when using various connectors.
<gallery>
Kscale db9 can bus convention.jpg
Kscale phoenix can bus convention.jpg
</gallery>
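For reference, once a harness is wired to this convention, the bus is typically brought up on the compute side with SocketCAN; the interface name and bitrate below are assumptions rather than Stompy-specific values:
<syntaxhighlight lang="bash">
# Bring up a SocketCAN interface; can0 and 1 Mbit/s are assumed values.
sudo ip link set can0 up type can bitrate 1000000

# candump (from can-utils) prints raw frames, confirming the bus is alive.
sudo apt-get install -y can-utils
candump can0
</syntaxhighlight>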
= Simulation =
For the latest simulation artifacts, see [https://kscale.dev/ the website].
= Artwork =
Here's some art of Stompy!
<gallery>
Stompy 1.png
Stompy 2.png
Stompy 3.png
Stompy 4.png
</gallery>
[[Category:Robots]]
[[Category:Open Source]]
[[Category:K-Scale]]
9cf6215dee47b745206fbcab51f3fb170035bc49
Pollen Robotics
0
317
1460
1455
2024-06-06T00:35:15Z
Admin
1
wikitext
text/x-wiki
[https://www.pollen-robotics.com/ Pollen Robotics] is a robotics company based in Bordeaux, France.
{{infobox company
| name = Pollen Robotics
| country = France
| website_link = https://www.pollen-robotics.com/
| robots = [[Reachy]]
}}
They have published many open source designs [https://www.pollen-robotics.com/opensource/ here].
[[Category:Companies]]
308e84f5532bf9dd9f34a7d010e442aa0cfa9064
Robot Descriptions List
0
281
1461
1452
2024-06-06T05:55:32Z
Vrtnis
21
/* HRP-4 */
wikitext
text/x-wiki
=== Humanoids ===
{| class="wikitable sortable"
|-
! Sr No !! Name !! Maker !! Formats !! URL !! License !! Meshes !! Inertias !! Collisions
|-
| 1 || Stompy || K-Scale Labs || URDF || [https://stompy.kscale.dev URDF], [https://stompy.kscale.dev MJCF] || MIT || ✔️ || ✔️ || ✔️
|-
| 2 || Digit || Agility Robotics || URDF || [https://github.com/adubredu/DigitRobot.jl URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| 3 || H1 || UNITREE Robotics || MJCF || [https://github.com/google-deepmind/mujoco_menagerie/tree/main/unitree_h1 MJCF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 4 || Atlas v4 || Boston Dynamics || URDF || [https://github.com/openai/roboschool/tree/1.0.49/roboschool/models_robot/atlas_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 5 || Valkyrie || NASA JSC Robotics || URDF, Xacro || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/val_description/model URDF], [https://gitlab.com/nasa-jsc-robotics/val_description Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 6 || JVRC-1 || AIST || MJCF, URDF || [https://github.com/isri-aist/jvrc_mj_description/ MJCF], [https://github.com/stephane-caron/jvrc_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 7 || iCub || IIT || URDF || [https://github.com/robotology/icub-models/tree/master/iCub URDF] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| 8 || JAXON || JSK || COLLADA, URDF, VRML || [https://github.com/stephane-caron/openrave_models/tree/master/JAXON COLLADA], [https://github.com/robot-descriptions/jaxon_description URDF], [https://github.com/start-jsk/rtmros_choreonoid/tree/master/jvrc_models/JAXON_JVRC VRML] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| 9 || Atlas DRC (v3) || Boston Dynamics || URDF || [https://github.com/RobotLocomotion/models/blob/master/atlas/atlas_convex_hull.urdf URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 10 || Gundam RX-78 || Bandai Namco Filmworks || URDF || [https://github.com/gundam-global-challenge/gundam_robot/tree/master/gundam_rx78_description URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| 11 || Romeo || Aldebaran Robotics || URDF || [https://github.com/ros-aldebaran/romeo_robot/tree/master/romeo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 12 || SigmaBan || Rhoban || URDF || [https://github.com/Rhoban/sigmaban_urdf URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 13 || Robonaut 2 || NASA JSC Robotics || URDF || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/r2_description URDF] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 14 || TALOS || PAL Robotics || URDF, Xacro || [https://github.com/stack-of-tasks/talos-data URDF], [https://github.com/pal-robotics/talos_robot/tree/kinetic-devel/talos_description Xacro] || LGPL-3.0, Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 15 || WALK-MAN || IIT || Xacro || [https://github.com/ADVRHumanoids/iit-walkman-ros-pkg/tree/master/walkman_urdf Xacro] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 16 || Draco3 || Apptronik || URDF || [https://github.com/shbang91/draco3_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 17 || ergoCub || IIT || URDF || [https://github.com/icub-tech-iit/ergocub-software/tree/master/urdf/ergoCub URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 18 || Baxter || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/baxter.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 19 || Pepper || SoftBank Robotics || URDF || [https://github.com/ros-naoqi/pepper_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 20 || Mini-Cheetah || MIT || URDF || [https://github.com/MIT-Mini-Cheetah/mini-cheetah URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 21 || Thor-Mang || ROBOTIS || URDF || [https://github.com/ROBOTIS-GIT/ROBOTIS-MANIPULATION-THORMANG URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 22 || Cassie || Agility Robotics || URDF || [https://github.com/agilityrobotics/cassie_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 23 || Sophia || Hanson Robotics || URDF || [https://github.com/hansonrobotics/sophia_robot URDF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 24 || Asimo || Honda || URDF || [https://github.com/honda/asimo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 25 || HRP-5P || Kawada Robotics || URDF || [https://github.com/kawada-robotics/hrp5p URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 26 || Valkyrie R5 || NASA || URDF, Xacro || [https://github.com/nasa/valkyrie_simulation URDF], [https://github.com/nasa/valkyrie_robot Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 27 || REEM-C || PAL Robotics || URDF || [https://github.com/pal-robotics/reemc_description URDF] || LGPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 28 || Darwin-OP || ROBOTIS || URDF || [https://github.com/ROBOTIS-GIT/Darwin_OP_ROS URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 29 || Poppy || Inria Flowers || URDF || [https://github.com/poppy-project/poppy_humanoid URDF] || GPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 30 || Kengoro || JSK || URDF || [https://github.com/jsk-ros-pkg/jsk_models/tree/master/kengoro_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 31 || SURALP || Istanbul Technical University || URDF || [https://github.com/suralp/suralp URDF] || GPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 32 || Kengoro || JSK || URDF || [https://github.com/jsk-ros-pkg/jsk_models/tree/master/kengoro_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 33 || ANYmal || ANYbotics || URDF || [https://github.com/leggedrobotics/anymal_b_simple_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 34 || MIR-Lola || Munich Institute of Robotics and Machine Intelligence || URDF || [https://github.com/mir-lab/lola_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 35 || HSR || Toyota || URDF || [https://github.com/toyota-research-institute/hsr_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 36 || Pepper 2 || SoftBank Robotics || URDF || [https://github.com/ros-naoqi/pepper_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 37 || BHR-4 || Beijing Institute of Technology || URDF || [https://github.com/bit-bots/bhr4_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 38 || Tiago || PAL Robotics || URDF || [https://github.com/pal-robotics/tiago_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 39 || InMoov || Gael Langevin || URDF || [https://github.com/InMoov/inmoov_ros URDF] || GPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 40 || BipedalWalker || OpenAI Gym || URDF || [https://github.com/openai/gym/tree/master/gym/envs/robotics/assets/bipedal_walker URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 41 || BOLT || Istituto Italiano di Tecnologia || URDF || [https://github.com/robotology/icub-main/tree/master/app/robots/bolt URDF] || GPL-2.0 || ✔️ || ✔️ || ✔️
|-
| 42 || HRP-4 || Kawada Robotics || URDF || [https://github.com/kawada-robotics/hrp4_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
=== End Effectors ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Allegro Hand || Wonik Robotics || URDF, MJCF || [https://github.com/RobotLocomotion/models/tree/master/allegro_hand_description/urdf URDF], [https://github.com/google-deepmind/mujoco_menagerie/tree/main/wonik_allegro MJCF] || BSD || ✔️ || ✔️ || ✔️
|-
| Shadow Hand E3M5 || The Shadow Robot Company || MJCF || [https://github.com/deepmind/mujoco_menagerie/tree/main/shadow_hand MJCF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Robotiq 2F-85 || Robotiq || MJCF, URDF, Xacro || [https://github.com/deepmind/mujoco_menagerie/tree/main/robotiq_2f85 MJCF], [https://github.com/a-price/robotiq_arg85_description URDF], [https://github.com/ros-industrial/robotiq/tree/kinetic-devel/robotiq_2f_85_gripper_visualization Xacro] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| BarrettHand || Barrett Technology || URDF || [https://github.com/jhu-lcsr-attic/bhand_model/tree/master/robots URDF] || BSD || ✖️ || ✔️ || ✔️
|-
| WSG 50 || SCHUNK || SDF || [https://github.com/RobotLocomotion/models/tree/master/wsg_50_description SDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Baxter Left End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/left_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Baxter Right End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/right_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|}
=== Educational ===
{| class="wikitable"
|-
! Name !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Double Pendulum || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/double_pendulum_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Simple Humanoid || URDF || [https://github.com/laas/simple_humanoid_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✖️
|-
| TriFingerEdu || URDF || [https://github.com/facebookresearch/differentiable-robot-model/tree/main/diff_robot_data/trifinger_edu_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| FingerEdu || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/finger_edu_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
== References ==
* GitHub and web searches
* https://github.com/robot-descriptions/awesome-robot-descriptions
* https://github.com/robot-descriptions/robot_descriptions.py
* https://github.com/robotology
== Citation ==
<pre>
@misc{humanoids-2024,
title={Robot Descriptions List},
author={K-Scale Humanoids Wiki Contributors},
year={2024},
url={https://humanoids.wiki/w/Robot_Descriptions_List}
}
</pre>
db0c1f0ffdae6b708f9253cfef6a81e6c0e639c3
1462
1461
2024-06-06T05:56:47Z
Vrtnis
21
/* PI-4 */
wikitext
text/x-wiki
=== Humanoids ===
{| class="wikitable sortable"
|-
! Sr No !! Name !! Maker !! Formats !! URL !! License !! Meshes !! Inertias !! Collisions
|-
| 1 || Stompy || K-Scale Labs || URDF || [https://stompy.kscale.dev URDF], [https://stompy.kscale.dev MJCF] || MIT || ✔️ || ✔️ || ✔️
|-
| 2 || Digit || Agility Robotics || URDF || [https://github.com/adubredu/DigitRobot.jl URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| 3 || H1 || UNITREE Robotics || MJCF || [https://github.com/google-deepmind/mujoco_menagerie/tree/main/unitree_h1 MJCF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 4 || Atlas v4 || Boston Dynamics || URDF || [https://github.com/openai/roboschool/tree/1.0.49/roboschool/models_robot/atlas_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 5 || Valkyrie || NASA JSC Robotics || URDF, Xacro || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/val_description/model URDF], [https://gitlab.com/nasa-jsc-robotics/val_description Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 6 || JVRC-1 || AIST || MJCF, URDF || [https://github.com/isri-aist/jvrc_mj_description/ MJCF], [https://github.com/stephane-caron/jvrc_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 7 || iCub || IIT || URDF || [https://github.com/robotology/icub-models/tree/master/iCub URDF] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| 8 || JAXON || JSK || COLLADA, URDF, VRML || [https://github.com/stephane-caron/openrave_models/tree/master/JAXON COLLADA], [https://github.com/robot-descriptions/jaxon_description URDF], [https://github.com/start-jsk/rtmros_choreonoid/tree/master/jvrc_models/JAXON_JVRC VRML] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| 9 || Atlas DRC (v3) || Boston Dynamics || URDF || [https://github.com/RobotLocomotion/models/blob/master/atlas/atlas_convex_hull.urdf URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 10 || Gundam RX-78 || Bandai Namco Filmworks || URDF || [https://github.com/gundam-global-challenge/gundam_robot/tree/master/gundam_rx78_description URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| 11 || Romeo || Aldebaran Robotics || URDF || [https://github.com/ros-aldebaran/romeo_robot/tree/master/romeo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 12 || SigmaBan || Rhoban || URDF || [https://github.com/Rhoban/sigmaban_urdf URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 13 || Robonaut 2 || NASA JSC Robotics || URDF || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/r2_description URDF] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 14 || TALOS || PAL Robotics || URDF, Xacro || [https://github.com/stack-of-tasks/talos-data URDF], [https://github.com/pal-robotics/talos_robot/tree/kinetic-devel/talos_description Xacro] || LGPL-3.0, Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 15 || WALK-MAN || IIT || Xacro || [https://github.com/ADVRHumanoids/iit-walkman-ros-pkg/tree/master/walkman_urdf Xacro] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 16 || Draco3 || Apptronik || URDF || [https://github.com/shbang91/draco3_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 17 || ergoCub || IIT || URDF || [https://github.com/icub-tech-iit/ergocub-software/tree/master/urdf/ergoCub URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 18 || Baxter || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/baxter.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 19 || Pepper || SoftBank Robotics || URDF || [https://github.com/ros-naoqi/pepper_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 20 || Mini-Cheetah || MIT || URDF || [https://github.com/MIT-Mini-Cheetah/mini-cheetah URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 21 || Thor-Mang || ROBOTIS || URDF || [https://github.com/ROBOTIS-GIT/ROBOTIS-MANIPULATION-THORMANG URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 22 || Cassie || Agility Robotics || URDF || [https://github.com/agilityrobotics/cassie_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 23 || Sophia || Hanson Robotics || URDF || [https://github.com/hansonrobotics/sophia_robot URDF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 24 || Asimo || Honda || URDF || [https://github.com/honda/asimo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 25 || HRP-5P || Kawada Robotics || URDF || [https://github.com/kawada-robotics/hrp5p URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 26 || Valkyrie R5 || NASA || URDF, Xacro || [https://github.com/nasa/valkyrie_simulation URDF], [https://github.com/nasa/valkyrie_robot Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 27 || REEM-C || PAL Robotics || URDF || [https://github.com/pal-robotics/reemc_description URDF] || LGPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 28 || Darwin-OP || ROBOTIS || URDF || [https://github.com/ROBOTIS-GIT/Darwin_OP_ROS URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 29 || Poppy || Inria Flowers || URDF || [https://github.com/poppy-project/poppy_humanoid URDF] || GPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 30 || Kengoro || JSK || URDF || [https://github.com/jsk-ros-pkg/jsk_models/tree/master/kengoro_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 31 || SURALP || Istanbul Technical University || URDF || [https://github.com/suralp/suralp URDF] || GPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 32 || Kengoro || JSK || URDF || [https://github.com/jsk-ros-pkg/jsk_models/tree/master/kengoro_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 33 || ANYmal || ANYbotics || URDF || [https://github.com/leggedrobotics/anymal_b_simple_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 34 || MIR-Lola || Munich Institute of Robotics and Machine Intelligence || URDF || [https://github.com/mir-lab/lola_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 35 || HSR || Toyota || URDF || [https://github.com/toyota-research-institute/hsr_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 36 || Pepper 2 || SoftBank Robotics || URDF || [https://github.com/ros-naoqi/pepper_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 37 || BHR-4 || Beijing Institute of Technology || URDF || [https://github.com/bit-bots/bhr4_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 38 || Tiago || PAL Robotics || URDF || [https://github.com/pal-robotics/tiago_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 39 || InMoov || Gael Langevin || URDF || [https://github.com/InMoov/inmoov_ros URDF] || GPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 40 || BipedalWalker || OpenAI Gym || URDF || [https://github.com/openai/gym/tree/master/gym/envs/robotics/assets/bipedal_walker URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 41 || BOLT || Istituto Italiano di Tecnologia || URDF || [https://github.com/robotology/icub-main/tree/master/app/robots/bolt URDF] || GPL-2.0 || ✔️ || ✔️ || ✔️
|-
| 42 || HRP-4 || Kawada Robotics || URDF || [https://github.com/kawada-robotics/hrp4_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 43 || PI4 || PAL Robotics || URDF || [https://github.com/pal-robotics/pi4_description URDF] || LGPL-3.0 || ✔️ || ✔️ || ✔️
|}
=== End Effectors ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Allegro Hand || Wonik Robotics || URDF, MJCF || [https://github.com/RobotLocomotion/models/tree/master/allegro_hand_description/urdf URDF], [https://github.com/google-deepmind/mujoco_menagerie/tree/main/wonik_allegro MJCF] || BSD || ✔️ || ✔️ || ✔️
|-
| Shadow Hand E3M5 || The Shadow Robot Company || MJCF || [https://github.com/deepmind/mujoco_menagerie/tree/main/shadow_hand MJCF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Robotiq 2F-85 || Robotiq || MJCF, URDF, Xacro || [https://github.com/deepmind/mujoco_menagerie/tree/main/robotiq_2f85 MJCF], [https://github.com/a-price/robotiq_arg85_description URDF], [https://github.com/ros-industrial/robotiq/tree/kinetic-devel/robotiq_2f_85_gripper_visualization Xacro] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| BarrettHand || Barrett Technology || URDF || [https://github.com/jhu-lcsr-attic/bhand_model/tree/master/robots URDF] || BSD || ✖️ || ✔️ || ✔️
|-
| WSG 50 || SCHUNK || SDF || [https://github.com/RobotLocomotion/models/tree/master/wsg_50_description SDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Baxter Left End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/left_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Baxter Right End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/right_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|}
=== Educational ===
{| class="wikitable"
|-
! Name !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Double Pendulum || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/double_pendulum_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Simple Humanoid || URDF || [https://github.com/laas/simple_humanoid_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✖️
|-
| TriFingerEdu || URDF || [https://github.com/facebookresearch/differentiable-robot-model/tree/main/diff_robot_data/trifinger_edu_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| FingerEdu || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/finger_edu_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
== References ==
* GitHub and web searches
* https://github.com/robot-descriptions/awesome-robot-descriptions
* https://github.com/robot-descriptions/robot_descriptions.py
* https://github.com/robotology
== Citation ==
<pre>
@misc{humanoids-2024,
title={Robot Descriptions List},
author={K-Scale Humanoids Wiki Contributors},
year={2024},
url={https://humanoids.wiki/w/Robot_Descriptions_List}
}
</pre>
1cb6b76b407862bf9604799a66d54900a6a23a5a
1463
1462
2024-06-06T05:58:24Z
Vrtnis
21
/* HRP-7P */
wikitext
text/x-wiki
=== Humanoids ===
{| class="wikitable sortable"
|-
! Sr No !! Name !! Maker !! Formats !! URL !! License !! Meshes !! Inertias !! Collisions
|-
| 1 || Stompy || K-Scale Labs || URDF || [https://stompy.kscale.dev URDF], [https://stompy.kscale.dev MJCF] || MIT || ✔️ || ✔️ || ✔️
|-
| 2 || Digit || Agility Robotics || URDF || [https://github.com/adubredu/DigitRobot.jl URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| 3 || H1 || UNITREE Robotics || MJCF || [https://github.com/google-deepmind/mujoco_menagerie/tree/main/unitree_h1 MJCF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 4 || Atlas v4 || Boston Dynamics || URDF || [https://github.com/openai/roboschool/tree/1.0.49/roboschool/models_robot/atlas_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 5 || Valkyrie || NASA JSC Robotics || URDF, Xacro || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/val_description/model URDF], [https://gitlab.com/nasa-jsc-robotics/val_description Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 6 || JVRC-1 || AIST || MJCF, URDF || [https://github.com/isri-aist/jvrc_mj_description/ MJCF], [https://github.com/stephane-caron/jvrc_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 7 || iCub || IIT || URDF || [https://github.com/robotology/icub-models/tree/master/iCub URDF] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| 8 || JAXON || JSK || COLLADA, URDF, VRML || [https://github.com/stephane-caron/openrave_models/tree/master/JAXON COLLADA], [https://github.com/robot-descriptions/jaxon_description URDF], [https://github.com/start-jsk/rtmros_choreonoid/tree/master/jvrc_models/JAXON_JVRC VRML] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| 9 || Atlas DRC (v3) || Boston Dynamics || URDF || [https://github.com/RobotLocomotion/models/blob/master/atlas/atlas_convex_hull.urdf URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 10 || Gundam RX-78 || Bandai Namco Filmworks || URDF || [https://github.com/gundam-global-challenge/gundam_robot/tree/master/gundam_rx78_description URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| 11 || Romeo || Aldebaran Robotics || URDF || [https://github.com/ros-aldebaran/romeo_robot/tree/master/romeo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 12 || SigmaBan || Rhoban || URDF || [https://github.com/Rhoban/sigmaban_urdf URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 13 || Robonaut 2 || NASA JSC Robotics || URDF || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/r2_description URDF] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 14 || TALOS || PAL Robotics || URDF, Xacro || [https://github.com/stack-of-tasks/talos-data URDF], [https://github.com/pal-robotics/talos_robot/tree/kinetic-devel/talos_description Xacro] || LGPL-3.0, Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 15 || WALK-MAN || IIT || Xacro || [https://github.com/ADVRHumanoids/iit-walkman-ros-pkg/tree/master/walkman_urdf Xacro] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 16 || Draco3 || Apptronik || URDF || [https://github.com/shbang91/draco3_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 17 || ergoCub || IIT || URDF || [https://github.com/icub-tech-iit/ergocub-software/tree/master/urdf/ergoCub URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 18 || Baxter || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/baxter.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 19 || Pepper || SoftBank Robotics || URDF || [https://github.com/ros-naoqi/pepper_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 20 || Mini-Cheetah || MIT || URDF || [https://github.com/MIT-Mini-Cheetah/mini-cheetah URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 21 || Thor-Mang || ROBOTIS || URDF || [https://github.com/ROBOTIS-GIT/ROBOTIS-MANIPULATION-THORMANG URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 22 || Cassie || Agility Robotics || URDF || [https://github.com/agilityrobotics/cassie_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 23 || Sophia || Hanson Robotics || URDF || [https://github.com/hansonrobotics/sophia_robot URDF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 24 || Asimo || Honda || URDF || [https://github.com/honda/asimo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 25 || HRP-5P || Kawada Robotics || URDF || [https://github.com/kawada-robotics/hrp5p URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 26 || Valkyrie R5 || NASA || URDF, Xacro || [https://github.com/nasa/valkyrie_simulation URDF], [https://github.com/nasa/valkyrie_robot Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 27 || REEM-C || PAL Robotics || URDF || [https://github.com/pal-robotics/reemc_description URDF] || LGPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 28 || Darwin-OP || ROBOTIS || URDF || [https://github.com/ROBOTIS-GIT/Darwin_OP_ROS URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 29 || Poppy || Inria Flowers || URDF || [https://github.com/poppy-project/poppy_humanoid URDF] || GPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 30 || Kengoro || JSK || URDF || [https://github.com/jsk-ros-pkg/jsk_models/tree/master/kengoro_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 31 || SURALP || Istanbul Technical University || URDF || [https://github.com/suralp/suralp URDF] || GPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 32 || Kengoro || JSK || URDF || [https://github.com/jsk-ros-pkg/jsk_models/tree/master/kengoro_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 33 || ANYmal || ANYbotics || URDF || [https://github.com/leggedrobotics/anymal_b_simple_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 34 || MIR-Lola || Munich Institute of Robotics and Machine Intelligence || URDF || [https://github.com/mir-lab/lola_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 35 || HSR || Toyota || URDF || [https://github.com/toyota-research-institute/hsr_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 36 || Pepper 2 || SoftBank Robotics || URDF || [https://github.com/ros-naoqi/pepper_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 37 || BHR-4 || Beijing Institute of Technology || URDF || [https://github.com/bit-bots/bhr4_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 38 || Tiago || PAL Robotics || URDF || [https://github.com/pal-robotics/tiago_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 39 || InMoov || Gael Langevin || URDF || [https://github.com/InMoov/inmoov_ros URDF] || GPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 40 || BipedalWalker || OpenAI Gym || URDF || [https://github.com/openai/gym/tree/master/gym/envs/robotics/assets/bipedal_walker URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 41 || BOLT || Istituto Italiano di Tecnologia || URDF || [https://github.com/robotology/icub-main/tree/master/app/robots/bolt URDF] || GPL-2.0 || ✔️ || ✔️ || ✔️
|-
| 42 || HRP-4 || Kawada Robotics || URDF || [https://github.com/kawada-robotics/hrp4_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 43 || PI4 || PAL Robotics || URDF || [https://github.com/pal-robotics/pi4_description URDF] || LGPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 44 || HRP-7P || Kawada Robotics || URDF || [https://github.com/kawada-robotics/hrp7p_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|}
=== End Effectors ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Allegro Hand || Wonik Robotics || URDF, MJCF || [https://github.com/RobotLocomotion/models/tree/master/allegro_hand_description/urdf URDF], [https://github.com/google-deepmind/mujoco_menagerie/tree/main/wonik_allegro MJCF] || BSD || ✔️ || ✔️ || ✔️
|-
| Shadow Hand E3M5 || The Shadow Robot Company || MJCF || [https://github.com/deepmind/mujoco_menagerie/tree/main/shadow_hand MJCF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Robotiq 2F-85 || Robotiq || MJCF, URDF, Xacro || [https://github.com/deepmind/mujoco_menagerie/tree/main/robotiq_2f85 MJCF], [https://github.com/a-price/robotiq_arg85_description URDF], [https://github.com/ros-industrial/robotiq/tree/kinetic-devel/robotiq_2f_85_gripper_visualization Xacro] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| BarrettHand || Barrett Technology || URDF || [https://github.com/jhu-lcsr-attic/bhand_model/tree/master/robots URDF] || BSD || ✖️ || ✔️ || ✔️
|-
| WSG 50 || SCHUNK || SDF || [https://github.com/RobotLocomotion/models/tree/master/wsg_50_description SDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Baxter Left End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/left_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Baxter Right End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/right_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|}
=== Educational ===
{| class="wikitable"
|-
! Name !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Double Pendulum || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/double_pendulum_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Simple Humanoid || URDF || [https://github.com/laas/simple_humanoid_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✖️
|-
| TriFingerEdu || URDF || [https://github.com/facebookresearch/differentiable-robot-model/tree/main/diff_robot_data/trifinger_edu_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| FingerEdu || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/finger_edu_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
== References ==
* GitHub and web searches
* https://github.com/robot-descriptions/awesome-robot-descriptions
* https://github.com/robot-descriptions/robot_descriptions.py
* https://github.com/robotology
== Citation ==
<pre>
@misc{humanoids-2024,
title={Robot Descriptions List},
author={K-Scale Humanoids Wiki Contributors},
year={2024},
url={https://humanoids.wiki/w/Robot_Descriptions_List}
}
</pre>
9c9918612d4204052247c6720a32707b1a4b5fc7
1464
1463
2024-06-06T21:32:00Z
Vrtnis
21
/*Add Juno*/
wikitext
text/x-wiki
=== Humanoids ===
{| class="wikitable sortable"
|-
! Sr No !! Name !! Maker !! Formats !! URL !! License !! Meshes !! Inertias !! Collisions
|-
| 1 || Stompy || K-Scale Labs || URDF || [https://stompy.kscale.dev URDF], [https://stompy.kscale.dev MJCF] || MIT || ✔️ || ✔️ || ✔️
|-
| 2 || Digit || Agility Robotics || URDF || [https://github.com/adubredu/DigitRobot.jl URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| 3 || H1 || UNITREE Robotics || MJCF || [https://github.com/google-deepmind/mujoco_menagerie/tree/main/unitree_h1 MJCF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 4 || Atlas v4 || Boston Dynamics || URDF || [https://github.com/openai/roboschool/tree/1.0.49/roboschool/models_robot/atlas_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 5 || Valkyrie || NASA JSC Robotics || URDF, Xacro || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/val_description/model URDF], [https://gitlab.com/nasa-jsc-robotics/val_description Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 6 || JVRC-1 || AIST || MJCF, URDF || [https://github.com/isri-aist/jvrc_mj_description/ MJCF], [https://github.com/stephane-caron/jvrc_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 7 || iCub || IIT || URDF || [https://github.com/robotology/icub-models/tree/master/iCub URDF] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| 8 || JAXON || JSK || COLLADA, URDF, VRML || [https://github.com/stephane-caron/openrave_models/tree/master/JAXON COLLADA], [https://github.com/robot-descriptions/jaxon_description URDF], [https://github.com/start-jsk/rtmros_choreonoid/tree/master/jvrc_models/JAXON_JVRC VRML] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| 9 || Atlas DRC (v3) || Boston Dynamics || URDF || [https://github.com/RobotLocomotion/models/blob/master/atlas/atlas_convex_hull.urdf URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 10 || Gundam RX-78 || Bandai Namco Filmworks || URDF || [https://github.com/gundam-global-challenge/gundam_robot/tree/master/gundam_rx78_description URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| 11 || Romeo || Aldebaran Robotics || URDF || [https://github.com/ros-aldebaran/romeo_robot/tree/master/romeo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 12 || SigmaBan || Rhoban || URDF || [https://github.com/Rhoban/sigmaban_urdf URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 13 || Robonaut 2 || NASA JSC Robotics || URDF || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/r2_description URDF] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 14 || TALOS || PAL Robotics || URDF, Xacro || [https://github.com/stack-of-tasks/talos-data URDF], [https://github.com/pal-robotics/talos_robot/tree/kinetic-devel/talos_description Xacro] || LGPL-3.0, Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 15 || WALK-MAN || IIT || Xacro || [https://github.com/ADVRHumanoids/iit-walkman-ros-pkg/tree/master/walkman_urdf Xacro] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 16 || Draco3 || Apptronik || URDF || [https://github.com/shbang91/draco3_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 17 || ergoCub || IIT || URDF || [https://github.com/icub-tech-iit/ergocub-software/tree/master/urdf/ergoCub URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 18 || Baxter || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/baxter.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 19 || Pepper || SoftBank Robotics || URDF || [https://github.com/ros-naoqi/pepper_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 20 || Mini-Cheetah || MIT || URDF || [https://github.com/MIT-Mini-Cheetah/mini-cheetah URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 21 || Thor-Mang || ROBOTIS || URDF || [https://github.com/ROBOTIS-GIT/ROBOTIS-MANIPULATION-THORMANG URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 22 || Cassie || Agility Robotics || URDF || [https://github.com/agilityrobotics/cassie_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 23 || Sophia || Hanson Robotics || URDF || [https://github.com/hansonrobotics/sophia_robot URDF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 24 || Asimo || Honda || URDF || [https://github.com/honda/asimo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 25 || HRP-5P || Kawada Robotics || URDF || [https://github.com/kawada-robotics/hrp5p URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 26 || Valkyrie R5 || NASA || URDF, Xacro || [https://github.com/nasa/valkyrie_simulation URDF], [https://github.com/nasa/valkyrie_robot Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 27 || REEM-C || PAL Robotics || URDF || [https://github.com/pal-robotics/reemc_description URDF] || LGPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 28 || Darwin-OP || ROBOTIS || URDF || [https://github.com/ROBOTIS-GIT/Darwin_OP_ROS URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 29 || Poppy || Inria Flowers || URDF || [https://github.com/poppy-project/poppy_humanoid URDF] || GPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 30 || Kengoro || JSK || URDF || [https://github.com/jsk-ros-pkg/jsk_models/tree/master/kengoro_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 31 || SURALP || Istanbul Technical University || URDF || [https://github.com/suralp/suralp URDF] || GPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 32 || Kengoro || JSK || URDF || [https://github.com/jsk-ros-pkg/jsk_models/tree/master/kengoro_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 33 || ANYmal || ANYbotics || URDF || [https://github.com/leggedrobotics/anymal_b_simple_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 34 || MIR-Lola || Munich Institute of Robotics and Machine Intelligence || URDF || [https://github.com/mir-lab/lola_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 35 || HSR || Toyota || URDF || [https://github.com/toyota-research-institute/hsr_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 36 || Pepper 2 || SoftBank Robotics || URDF || [https://github.com/ros-naoqi/pepper_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 37 || BHR-4 || Beijing Institute of Technology || URDF || [https://github.com/bit-bots/bhr4_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 38 || Tiago || PAL Robotics || URDF || [https://github.com/pal-robotics/tiago_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 39 || InMoov || Gael Langevin || URDF || [https://github.com/InMoov/inmoov_ros URDF] || GPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 40 || BipedalWalker || OpenAI Gym || URDF || [https://github.com/openai/gym/tree/master/gym/envs/robotics/assets/bipedal_walker URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 41 || BOLT || Istituto Italiano di Tecnologia || URDF || [https://github.com/robotology/icub-main/tree/master/app/robots/bolt URDF] || GPL-2.0 || ✔️ || ✔️ || ✔️
|-
| 42 || HRP-4 || Kawada Robotics || URDF || [https://github.com/kawada-robotics/hrp4_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 43 || PI4 || PAL Robotics || URDF || [https://github.com/pal-robotics/pi4_description URDF] || LGPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 44 || HRP-7P || Kawada Robotics || URDF || [https://github.com/kawada-robotics/hrp7p_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 45 || Juno || UC Berkeley || URDF || [https://github.com/BerkeleyAutomation/juno_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
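A quick way to sanity-check any entry above is to clone its repository and inspect the description files directly; the JVRC-1 repository from the table is used here only as an example:
<syntaxhighlight lang="bash">
# Clone one description repo from the table and list its URDF/Xacro files,
# then do a rough count of <joint> tags in the first URDF found.
git clone https://github.com/stephane-caron/jvrc_description.git
find jvrc_description -name '*.urdf' -o -name '*.xacro'
grep -c "<joint" "$(find jvrc_description -name '*.urdf' | head -n 1)"
</syntaxhighlight>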
=== End Effectors ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Allegro Hand || Wonik Robotics || URDF, MJCF || [https://github.com/RobotLocomotion/models/tree/master/allegro_hand_description/urdf URDF], [https://github.com/google-deepmind/mujoco_menagerie/tree/main/wonik_allegro MJCF] || BSD || ✔️ || ✔️ || ✔️
|-
| Shadow Hand E3M5 || The Shadow Robot Company || MJCF || [https://github.com/deepmind/mujoco_menagerie/tree/main/shadow_hand MJCF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Robotiq 2F-85 || Robotiq || MJCF, URDF, Xacro || [https://github.com/deepmind/mujoco_menagerie/tree/main/robotiq_2f85 MJCF], [https://github.com/a-price/robotiq_arg85_description URDF], [https://github.com/ros-industrial/robotiq/tree/kinetic-devel/robotiq_2f_85_gripper_visualization Xacro] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| BarrettHand || Barrett Technology || URDF || [https://github.com/jhu-lcsr-attic/bhand_model/tree/master/robots URDF] || BSD || ✖️ || ✔️ || ✔️
|-
| WSG 50 || SCHUNK || SDF || [https://github.com/RobotLocomotion/models/tree/master/wsg_50_description SDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Baxter Left End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/left_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Baxter Right End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/right_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|}
=== Educational ===
{| class="wikitable"
|-
! Name !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Double Pendulum || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/double_pendulum_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Simple Humanoid || URDF || [https://github.com/laas/simple_humanoid_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✖️
|-
| TriFingerEdu || URDF || [https://github.com/facebookresearch/differentiable-robot-model/tree/main/diff_robot_data/trifinger_edu_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| FingerEdu || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/finger_edu_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
== References ==
* GitHub and web searches
* https://github.com/robot-descriptions/awesome-robot-descriptions
* https://github.com/robot-descriptions/robot_descriptions.py
* https://github.com/robotology
== Citation ==
<pre>
@misc{humanoids-2024,
title={Robot Descriptions List},
author={K-Scale Humanoids Wiki Contributors},
year={2024},
url={https://humanoids.wiki/w/Robot_Descriptions_List}
}
</pre>
04dc6bb44b8fcacde48079c79e9aac7e008548f6
1465
1464
2024-06-06T21:32:35Z
Vrtnis
21
/* Add Leo and Apollo */
wikitext
text/x-wiki
=== Humanoids ===
{| class="wikitable sortable"
|-
! Sr No !! Name !! Maker !! Formats !! URL !! License !! Meshes !! Inertias !! Collisions
|-
| 1 || Stompy || K-Scale Labs || URDF || [https://stompy.kscale.dev URDF], [https://stompy.kscale.dev MJCF] || MIT || ✔️ || ✔️ || ✔️
|-
| 2 || Digit || Agility Robotics || URDF || [https://github.com/adubredu/DigitRobot.jl URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| 3 || H1 || UNITREE Robotics || MJCF || [https://github.com/google-deepmind/mujoco_menagerie/tree/main/unitree_h1 MJCF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 4 || Atlas v4 || Boston Dynamics || URDF || [https://github.com/openai/roboschool/tree/1.0.49/roboschool/models_robot/atlas_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 5 || Valkyrie || NASA JSC Robotics || URDF, Xacro || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/val_description/model URDF], [https://gitlab.com/nasa-jsc-robotics/val_description Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 6 || JVRC-1 || AIST || MJCF, URDF || [https://github.com/isri-aist/jvrc_mj_description/ MJCF], [https://github.com/stephane-caron/jvrc_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 7 || iCub || IIT || URDF || [https://github.com/robotology/icub-models/tree/master/iCub URDF] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| 8 || JAXON || JSK || COLLADA, URDF, VRML || [https://github.com/stephane-caron/openrave_models/tree/master/JAXON COLLADA], [https://github.com/robot-descriptions/jaxon_description URDF], [https://github.com/start-jsk/rtmros_choreonoid/tree/master/jvrc_models/JAXON_JVRC VRML] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| 9 || Atlas DRC (v3) || Boston Dynamics || URDF || [https://github.com/RobotLocomotion/models/blob/master/atlas/atlas_convex_hull.urdf URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 10 || Gundam RX-78 || Bandai Namco Filmworks || URDF || [https://github.com/gundam-global-challenge/gundam_robot/tree/master/gundam_rx78_description URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| 11 || Romeo || Aldebaran Robotics || URDF || [https://github.com/ros-aldebaran/romeo_robot/tree/master/romeo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 12 || SigmaBan || Rhoban || URDF || [https://github.com/Rhoban/sigmaban_urdf URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 13 || Robonaut 2 || NASA JSC Robotics || URDF || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/r2_description URDF] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 14 || TALOS || PAL Robotics || URDF, Xacro || [https://github.com/stack-of-tasks/talos-data URDF], [https://github.com/pal-robotics/talos_robot/tree/kinetic-devel/talos_description Xacro] || LGPL-3.0, Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 15 || WALK-MAN || IIT || Xacro || [https://github.com/ADVRHumanoids/iit-walkman-ros-pkg/tree/master/walkman_urdf Xacro] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 16 || Draco3 || Apptronik || URDF || [https://github.com/shbang91/draco3_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 17 || ergoCub || IIT || URDF || [https://github.com/icub-tech-iit/ergocub-software/tree/master/urdf/ergoCub URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 18 || Baxter || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/baxter.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 19 || Pepper || SoftBank Robotics || URDF || [https://github.com/ros-naoqi/pepper_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 20 || Mini-Cheetah || MIT || URDF || [https://github.com/MIT-Mini-Cheetah/mini-cheetah URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 21 || Thor-Mang || ROBOTIS || URDF || [https://github.com/ROBOTIS-GIT/ROBOTIS-MANIPULATION-THORMANG URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 22 || Cassie || Agility Robotics || URDF || [https://github.com/agilityrobotics/cassie_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 23 || Sophia || Hanson Robotics || URDF || [https://github.com/hansonrobotics/sophia_robot URDF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 24 || Asimo || Honda || URDF || [https://github.com/honda/asimo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 25 || HRP-5P || Kawada Robotics || URDF || [https://github.com/kawada-robotics/hrp5p URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 26 || Valkyrie R5 || NASA || URDF, Xacro || [https://github.com/nasa/valkyrie_simulation URDF], [https://github.com/nasa/valkyrie_robot Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 27 || REEM-C || PAL Robotics || URDF || [https://github.com/pal-robotics/reemc_description URDF] || LGPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 28 || Darwin-OP || ROBOTIS || URDF || [https://github.com/ROBOTIS-GIT/Darwin_OP_ROS URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 29 || Poppy || Inria Flowers || URDF || [https://github.com/poppy-project/poppy_humanoid URDF] || GPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 30 || Kengoro || JSK || URDF || [https://github.com/jsk-ros-pkg/jsk_models/tree/master/kengoro_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 31 || SURALP || Istanbul Technical University || URDF || [https://github.com/suralp/suralp URDF] || GPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 32 || Kengoro || JSK || URDF || [https://github.com/jsk-ros-pkg/jsk_models/tree/master/kengoro_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 33 || ANYmal || ANYbotics || URDF || [https://github.com/leggedrobotics/anymal_b_simple_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 34 || MIR-Lola || Munich Institute of Robotics and Machine Intelligence || URDF || [https://github.com/mir-lab/lola_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 35 || HSR || Toyota || URDF || [https://github.com/toyota-research-institute/hsr_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 36 || Pepper 2 || SoftBank Robotics || URDF || [https://github.com/ros-naoqi/pepper_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 37 || BHR-4 || Beijing Institute of Technology || URDF || [https://github.com/bit-bots/bhr4_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 38 || Tiago || PAL Robotics || URDF || [https://github.com/pal-robotics/tiago_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 39 || InMoov || Gael Langevin || URDF || [https://github.com/InMoov/inmoov_ros URDF] || GPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 40 || BipedalWalker || OpenAI Gym || URDF || [https://github.com/openai/gym/tree/master/gym/envs/robotics/assets/bipedal_walker URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 41 || BOLT || Istituto Italiano di Tecnologia || URDF || [https://github.com/robotology/icub-main/tree/master/app/robots/bolt URDF] || GPL-2.0 || ✔️ || ✔️ || ✔️
|-
| 42 || HRP-4 || Kawada Robotics || URDF || [https://github.com/kawada-robotics/hrp4_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 43 || PI4 || PAL Robotics || URDF || [https://github.com/pal-robotics/pi4_description URDF] || LGPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 44 || HRP-7P || Kawada Robotics || URDF || [https://github.com/kawada-robotics/hrp7p_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 45 || Juno || UC Berkeley || URDF || [https://github.com/BerkeleyAutomation/juno_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 46 || Leo || Georgia Institute of Technology || URDF || [https://github.com/GeorgiaTechRobotLearning/leo_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 47 || Apollo || University of Pennsylvania || URDF || [https://github.com/penn-robotics/apollo_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|}
=== End Effectors ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Allegro Hand || Wonik Robotics || URDF, MJCF || [https://github.com/RobotLocomotion/models/tree/master/allegro_hand_description/urdf URDF], [https://github.com/google-deepmind/mujoco_menagerie/tree/main/wonik_allegro MJCF] || BSD || ✔️ || ✔️ || ✔️
|-
| Shadow Hand E3M5 || The Shadow Robot Company || MJCF || [https://github.com/deepmind/mujoco_menagerie/tree/main/shadow_hand MJCF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Robotiq 2F-85 || Robotiq || MJCF, URDF, Xacro || [https://github.com/deepmind/mujoco_menagerie/tree/main/robotiq_2f85 MJCF], [https://github.com/a-price/robotiq_arg85_description URDF], [https://github.com/ros-industrial/robotiq/tree/kinetic-devel/robotiq_2f_85_gripper_visualization Xacro] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| BarrettHand || Barrett Technology || URDF || [https://github.com/jhu-lcsr-attic/bhand_model/tree/master/robots URDF] || BSD || ✖️ || ✔️ || ✔️
|-
| WSG 50 || SCHUNK || SDF || [https://github.com/RobotLocomotion/models/tree/master/wsg_50_description SDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Baxter Left End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/left_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Baxter Right End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/right_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|}
=== Educational ===
{| class="wikitable"
|-
! Name !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Double Pendulum || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/double_pendulum_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Simple Humanoid || URDF || [https://github.com/laas/simple_humanoid_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✖️
|-
| TriFingerEdu || URDF || [https://github.com/facebookresearch/differentiable-robot-model/tree/main/diff_robot_data/trifinger_edu_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| FingerEdu || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/finger_edu_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
== References ==
* GitHub and web searches
* https://github.com/robot-descriptions/awesome-robot-descriptions
* https://github.com/robot-descriptions/robot_descriptions.py
* https://github.com/robotology
== Citation ==
<pre>
@misc{humanoids-2024,
title={Robot Descriptions List},
author={K-Scale Humanoids Wiki Contributors},
year={2024},
url={https://humanoids.wiki/w/Robot_Descriptions_List}
}
</pre>
cdf19c0dfc2b5d4fa4204b28710fad014349c364
1507
1465
2024-06-11T19:26:20Z
Vrtnis
21
/*Scuttle*/
wikitext
text/x-wiki
=== Humanoids ===
{| class="wikitable sortable"
|-
! Sr No !! Name !! Maker !! Formats !! URL !! License !! Meshes !! Inertias !! Collisions
|-
| 1 || Stompy || K-Scale Labs || URDF || [https://stompy.kscale.dev URDF], [https://stompy.kscale.dev MJCF] || MIT || ✔️ || ✔️ || ✔️
|-
| 2 || Digit || Agility Robotics || URDF || [https://github.com/adubredu/DigitRobot.jl URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| 3 || H1 || UNITREE Robotics || MJCF || [https://github.com/google-deepmind/mujoco_menagerie/tree/main/unitree_h1 MJCF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 4 || Atlas v4 || Boston Dynamics || URDF || [https://github.com/openai/roboschool/tree/1.0.49/roboschool/models_robot/atlas_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 5 || Valkyrie || NASA JSC Robotics || URDF, Xacro || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/val_description/model URDF], [https://gitlab.com/nasa-jsc-robotics/val_description Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 6 || JVRC-1 || AIST || MJCF, URDF || [https://github.com/isri-aist/jvrc_mj_description/ MJCF], [https://github.com/stephane-caron/jvrc_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 7 || iCub || IIT || URDF || [https://github.com/robotology/icub-models/tree/master/iCub URDF] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| 8 || JAXON || JSK || COLLADA, URDF, VRML || [https://github.com/stephane-caron/openrave_models/tree/master/JAXON COLLADA], [https://github.com/robot-descriptions/jaxon_description URDF], [https://github.com/start-jsk/rtmros_choreonoid/tree/master/jvrc_models/JAXON_JVRC VRML] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| 9 || Atlas DRC (v3) || Boston Dynamics || URDF || [https://github.com/RobotLocomotion/models/blob/master/atlas/atlas_convex_hull.urdf URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 10 || Gundam RX-78 || Bandai Namco Filmworks || URDF || [https://github.com/gundam-global-challenge/gundam_robot/tree/master/gundam_rx78_description URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| 11 || Romeo || Aldebaran Robotics || URDF || [https://github.com/ros-aldebaran/romeo_robot/tree/master/romeo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 12 || SigmaBan || Rhoban || URDF || [https://github.com/Rhoban/sigmaban_urdf URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 13 || Robonaut 2 || NASA JSC Robotics || URDF || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/r2_description URDF] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 14 || TALOS || PAL Robotics || URDF, Xacro || [https://github.com/stack-of-tasks/talos-data URDF], [https://github.com/pal-robotics/talos_robot/tree/kinetic-devel/talos_description Xacro] || LGPL-3.0, Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 15 || WALK-MAN || IIT || Xacro || [https://github.com/ADVRHumanoids/iit-walkman-ros-pkg/tree/master/walkman_urdf Xacro] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 16 || Draco3 || Apptronik || URDF || [https://github.com/shbang91/draco3_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 17 || ergoCub || IIT || URDF || [https://github.com/icub-tech-iit/ergocub-software/tree/master/urdf/ergoCub URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 18 || Baxter || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/baxter.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 19 || Pepper || SoftBank Robotics || URDF || [https://github.com/ros-naoqi/pepper_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 20 || Mini-Cheetah || MIT || URDF || [https://github.com/MIT-Mini-Cheetah/mini-cheetah URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 21 || Thor-Mang || ROBOTIS || URDF || [https://github.com/ROBOTIS-GIT/ROBOTIS-MANIPULATION-THORMANG URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 22 || Cassie || Agility Robotics || URDF || [https://github.com/agilityrobotics/cassie_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 23 || Sophia || Hanson Robotics || URDF || [https://github.com/hansonrobotics/sophia_robot URDF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 24 || Asimo || Honda || URDF || [https://github.com/honda/asimo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 25 || HRP-5P || Kawada Robotics || URDF || [https://github.com/kawada-robotics/hrp5p URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 26 || Valkyrie R5 || NASA || URDF, Xacro || [https://github.com/nasa/valkyrie_simulation URDF], [https://github.com/nasa/valkyrie_robot Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 27 || REEM-C || PAL Robotics || URDF || [https://github.com/pal-robotics/reemc_description URDF] || LGPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 28 || Darwin-OP || ROBOTIS || URDF || [https://github.com/ROBOTIS-GIT/Darwin_OP_ROS URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 29 || Poppy || Inria Flowers || URDF || [https://github.com/poppy-project/poppy_humanoid URDF] || GPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 30 || Kengoro || JSK || URDF || [https://github.com/jsk-ros-pkg/jsk_models/tree/master/kengoro_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 31 || SURALP || Istanbul Technical University || URDF || [https://github.com/suralp/suralp URDF] || GPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 32 || Kengoro || JSK || URDF || [https://github.com/jsk-ros-pkg/jsk_models/tree/master/kengoro_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 33 || ANYmal || ANYbotics || URDF || [https://github.com/leggedrobotics/anymal_b_simple_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 34 || MIR-Lola || Munich Institute of Robotics and Machine Intelligence || URDF || [https://github.com/mir-lab/lola_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 35 || HSR || Toyota || URDF || [https://github.com/toyota-research-institute/hsr_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 36 || Pepper 2 || SoftBank Robotics || URDF || [https://github.com/ros-naoqi/pepper_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 37 || BHR-4 || Beijing Institute of Technology || URDF || [https://github.com/bit-bots/bhr4_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 38 || Tiago || PAL Robotics || URDF || [https://github.com/pal-robotics/tiago_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 39 || InMoov || Gael Langevin || URDF || [https://github.com/InMoov/inmoov_ros URDF] || GPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 40 || BipedalWalker || OpenAI Gym || URDF || [https://github.com/openai/gym/tree/master/gym/envs/robotics/assets/bipedal_walker URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 41 || BOLT || Istituto Italiano di Tecnologia || URDF || [https://github.com/robotology/icub-main/tree/master/app/robots/bolt URDF] || GPL-2.0 || ✔️ || ✔️ || ✔️
|-
| 42 || HRP-4 || Kawada Robotics || URDF || [https://github.com/kawada-robotics/hrp4_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 43 || PI4 || PAL Robotics || URDF || [https://github.com/pal-robotics/pi4_description URDF] || LGPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 44 || HRP-7P || Kawada Robotics || URDF || [https://github.com/kawada-robotics/hrp7p_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 45 || Juno || UC Berkeley || URDF || [https://github.com/BerkeleyAutomation/juno_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 46 || Leo || Georgia Institute of Technology || URDF || [https://github.com/GeorgiaTechRobotLearning/leo_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 47 || Apollo || University of Pennsylvania || URDF || [https://github.com/penn-robotics/apollo_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 48 || Scuttle || Open Robotics || URDF || [https://github.com/openrobotics/scuttle_description URDF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|}
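The entries above are plain robot description files (URDF, MJCF, SDF, or Xacro) and can be inspected with ordinary XML tooling; Xacro files first need to be expanded to URDF (for example with the ROS xacro tool). Below is a minimal sketch, using only the Python standard library, that lists the links and joints of a locally downloaded URDF; the file path is a placeholder.
<syntaxhighlight lang=python>
# Minimal sketch: enumerate the links and joints of a URDF file.
# The path below is a placeholder for any URDF downloaded from the tables on this page.
import xml.etree.ElementTree as ET

URDF_PATH = "robot.urdf"

root = ET.parse(URDF_PATH).getroot()  # top-level <robot> element
print("robot name:", root.get("name"))

for link in root.findall("link"):
    print("link:", link.get("name"))

for joint in root.findall("joint"):
    parent = joint.find("parent").get("link")
    child = joint.find("child").get("link")
    print(f"joint: {joint.get('name')} ({joint.get('type')}) {parent} -> {child}")
</syntaxhighlight>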
=== End Effectors ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Allegro Hand || Wonik Robotics || URDF, MJCF || [https://github.com/RobotLocomotion/models/tree/master/allegro_hand_description/urdf URDF], [https://github.com/google-deepmind/mujoco_menagerie/tree/main/wonik_allegro MJCF] || BSD || ✔️ || ✔️ || ✔️
|-
| Shadow Hand E3M5 || The Shadow Robot Company || MJCF || [https://github.com/deepmind/mujoco_menagerie/tree/main/shadow_hand MJCF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Robotiq 2F-85 || Robotiq || MJCF, URDF, Xacro || [https://github.com/deepmind/mujoco_menagerie/tree/main/robotiq_2f85 MJCF], [https://github.com/a-price/robotiq_arg85_description URDF], [https://github.com/ros-industrial/robotiq/tree/kinetic-devel/robotiq_2f_85_gripper_visualization Xacro] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| BarrettHand || Barrett Technology || URDF || [https://github.com/jhu-lcsr-attic/bhand_model/tree/master/robots URDF] || BSD || ✖️ || ✔️ || ✔️
|-
| WSG 50 || SCHUNK || SDF || [https://github.com/RobotLocomotion/models/tree/master/wsg_50_description SDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Baxter Left End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/left_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Baxter Right End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/right_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|}
=== Educational ===
{| class="wikitable"
|-
! Name !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Double Pendulum || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/double_pendulum_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Simple Humanoid || URDF || [https://github.com/laas/simple_humanoid_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✖️
|-
| TriFingerEdu || URDF || [https://github.com/facebookresearch/differentiable-robot-model/tree/main/diff_robot_data/trifinger_edu_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| FingerEdu || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/finger_edu_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
== References ==
* GitHub and web searches
* https://github.com/robot-descriptions/awesome-robot-descriptions
* https://github.com/robot-descriptions/robot_descriptions.py
* https://github.com/robotology
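The robot_descriptions.py package listed above can also fetch many of these models by name. The sketch below assumes the package is installed and exposes per-robot submodules with a URDF_PATH attribute, as its documentation describes; the submodule chosen here is only an example, so check the package's own index for exact names.
<syntaxhighlight lang=python>
# Minimal sketch, assuming the robot_descriptions package is installed
# (pip install robot_descriptions) and that it exposes per-robot submodules
# with a URDF_PATH attribute, as its documentation describes.
from robot_descriptions import jvrc_description  # example submodule; see the package index

# Importing the submodule downloads and caches the description, then exposes its path.
print(jvrc_description.URDF_PATH)
</syntaxhighlight>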
== Citation ==
<pre>
@misc{humanoids-2024,
title={Robot Descriptions List},
author={K-Scale Humanoids Wiki Contributors},
year={2024},
url={https://humanoids.wiki/w/Robot_Descriptions_List}
}
</pre>
eac2b245114e17402b638f7016b338aa53e7f799
Pose Estimation
0
319
1466
2024-06-06T21:34:32Z
Vrtnis
21
/*Add Pose Estimation Overview*/
wikitext
text/x-wiki
'''Pose estimation''' is a computer vision technique that predicts the configuration of a person's or object's joints or parts in an image or video.
It involves detecting and tracking the position and orientation of these parts, usually represented as keypoints.
Pose estimation is widely used in applications such as motion capture, human-computer interaction, augmented reality, and robotics. The process typically involves training machine learning models on large datasets of annotated images to accurately identify and locate the keypoints.
These models can range from simple algorithms for 2D pose estimation to more complex systems that infer 3D poses. Recent advances in deep learning have significantly improved the accuracy and robustness of pose estimation systems, enabling their use in real-time applications.
708c04cfa0086b79906bb1e353806ca6df0aa7eb
1467
1466
2024-06-06T21:37:59Z
Vrtnis
21
/*Add Mediapipe*/
wikitext
text/x-wiki
'''Pose estimation''' is a computer vision technique that predicts the configuration of a person's or object's joints or parts in an image or video.
It involves detecting and tracking the position and orientation of these parts, usually represented as keypoints.
Pose estimation is widely used in applications such as motion capture, human-computer interaction, augmented reality, and robotics. The process typically involves training machine learning models on large datasets of annotated images to accurately identify and locate the keypoints.
These models can range from simple algorithms for 2D pose estimation to more complex systems that infer 3D poses. Recent advances in deep learning have significantly improved the accuracy and robustness of pose estimation systems, enabling their use in real-time applications.
'''MediaPipe''' is a cross-platform machine learning framework developed by Google; its Pose solution identifies and tracks human poses in real time. It uses machine learning to detect and map keypoints on the human body, such as the elbows, knees, and shoulders, giving a detailed picture of body posture and movement.
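As an illustration, a single image can be run through the MediaPipe Pose solution in a few lines of Python. This is a minimal sketch that assumes the mediapipe and opencv-python packages are installed; the image path is a placeholder.
<syntaxhighlight lang=python>
# Minimal sketch: single-image pose detection with the MediaPipe Pose solution.
# Assumes mediapipe and opencv-python are installed; the image path is a placeholder.
import cv2
import mediapipe as mp

image = cv2.imread("person.jpg")  # placeholder path to an input image

with mp.solutions.pose.Pose(static_image_mode=True) as pose:
    # MediaPipe expects RGB input; OpenCV loads images as BGR.
    results = pose.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

if results.pose_landmarks:
    for idx, lm in enumerate(results.pose_landmarks.landmark):
        # x and y are normalized image coordinates; z is a relative depth estimate.
        print(idx, round(lm.x, 3), round(lm.y, 3), round(lm.z, 3), round(lm.visibility, 3))
</syntaxhighlight>
Each of the 33 landmarks also carries a visibility score, which is useful for filtering out occluded joints.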
9be84cec7f9d20e79409756d35cd45bfe9fad59a
1468
1467
2024-06-07T01:49:20Z
Vrtnis
21
/*Add OpenPose*/
wikitext
text/x-wiki
{| class="wikitable sortable"
|-
! Sr No !! Model !! Developer !! Key Points !! Source !! License
|-
| 1 || OpenPose || Carnegie Mellon University || Detecting key points of the human body, including hand, facial, and foot || [https://github.com/CMU-Perceptual-Computing-Lab/openpose OpenPose GitHub] || MIT
|}
fa11b6aafe30053b5827ce5368d3da68c47b82eb
1469
1468
2024-06-07T01:50:00Z
Vrtnis
21
wikitext
text/x-wiki
Pose estimation is a computer vision technique that predicts the configuration of a person's or object's joints or parts in an image or video.
It involves detecting and tracking the position and orientation of these parts, usually represented as keypoints.
Pose estimation is widely used in applications such as motion capture, human-computer interaction, augmented reality, and robotics. The process typically involves training machine learning models on large datasets of annotated images to accurately identify and locate the keypoints.
These models can range from simple algorithms for 2D pose estimation to more complex systems that infer 3D poses. Recent advances in deep learning have significantly improved the accuracy and robustness of pose estimation systems, enabling their use in real-time applications.
{| class="wikitable sortable"
|-
! Sr No !! Model !! Developer !! Key Points !! Source !! License
|-
| 1 || OpenPose || Carnegie Mellon University || Detecting key points of the human body, including hand, facial, and foot || [https://github.com/CMU-Perceptual-Computing-Lab/openpose OpenPose GitHub] || MIT
|}
654ff331da09fddc9b2906ca22a6e26f7d0640a8
1470
1469
2024-06-07T01:50:22Z
Vrtnis
21
wikitext
text/x-wiki
Pose estimation is a computer vision technique that predicts the configuration of a person's or object's joints or parts in an image or video.
It involves detecting and tracking the position and orientation of these parts, usually represented as keypoints.
Pose estimation is widely used in applications such as motion capture, human-computer interaction, augmented reality, and robotics. The process typically involves training machine learning models on large datasets of annotated images to accurately identify and locate the keypoints.
{| class="wikitable sortable"
|-
! Sr No !! Model !! Developer !! Key Points !! Source !! License
|-
| 1 || OpenPose || Carnegie Mellon University || Detecting key points of the human body, including hand, facial, and foot || [https://github.com/CMU-Perceptual-Computing-Lab/openpose OpenPose GitHub] || MIT
|}
a939b8c4fa2caf38643ba2844e75ac1c5cf79a25
1471
1470
2024-06-07T01:55:55Z
Vrtnis
21
wikitext
text/x-wiki
Pose estimation is a computer vision technique that predicts the configuration of a person's or object's joints or parts in an image or video.
It involves detecting and tracking the position and orientation of these parts, usually represented as keypoints.
Pose estimation is widely used in applications such as motion capture, human-computer interaction, augmented reality, and robotics. The process typically involves training machine learning models on large datasets of annotated images to accurately identify and locate the keypoints.
=== Pose Estimation Related Models ===
{| class="wikitable sortable"
|-
! Sr No !! Model !! Developer !! Key Points !! Source !! License
|-
| 1 || OpenPose || Carnegie Mellon University || Detecting key points of the human body, including hand, facial, and foot || [https://github.com/CMU-Perceptual-Computing-Lab/openpose OpenPose GitHub] || MIT
|}
de9436a67134f4369a58e972bf5a40b15cff09cb
1472
1471
2024-06-07T01:56:31Z
Vrtnis
21
/* Add MoveNet */
wikitext
text/x-wiki
Pose estimation is a computer vision technique that predicts the configuration of a person's or object's joints or parts in an image or video.
It involves detecting and tracking the position and orientation of these parts, usually represented as keypoints.
Pose estimation is widely used in applications such as motion capture, human-computer interaction, augmented reality, and robotics. The process typically involves training machine learning models on large datasets of annotated images to accurately identify and locate the keypoints.
=== Pose Estimation Related Models ===
{| class="wikitable sortable"
|-
! Sr No !! Model !! Developer !! Key Points !! Source !! License
|-
| 1 || OpenPose || Carnegie Mellon University || Detecting key points of the human body, including hand, facial, and foot || [https://github.com/CMU-Perceptual-Computing-Lab/openpose OpenPose GitHub] || MIT
|-
| 2 || MoveNet || Google Research || Detecting 17 critical key points of the human body || [https://github.com/tensorflow/tfjs-models/tree/master/posenet MoveNet GitHub] || Apache 2.0
|}
d158e6ff80e610698aa3e688b34a5a621276d78c
1473
1472
2024-06-07T04:51:42Z
Vrtnis
21
/*MediaPipe and Detectron2 */
wikitext
text/x-wiki
Pose estimation is a computer vision technique that predicts the configuration of a person's or object's joints or parts in an image or video.
It involves detecting and tracking the position and orientation of these parts, usually represented as keypoints.
Pose estimation is widely used in applications such as motion capture, human-computer interaction, augmented reality, and robotics. The process typically involves training machine learning models on large datasets of annotated images to accurately identify and locate the keypoints.
=== Pose Estimation Related Models ===
{| class="wikitable sortable"
|-
! Sr No !! Model !! Developer !! Key Points !! Source !! License
|-
| 1 || MediaPipe || Google || Tracking 33 key points on the human body, offering cross-platform, customizable ML solutions || [https://github.com/google/mediapipe MediaPipe GitHub] || Apache 2.0
|-
| 2 || Detectron2 || Facebook AI Research || High-performance codebase for object detection and segmentation, including pose estimation || [https://github.com/facebookresearch/detectron2 Detectron2 GitHub] || Apache 2.0
|-
| 3 || OpenPose || Carnegie Mellon University || Detecting key points of the human body, including hand, facial, and foot || [https://github.com/CMU-Perceptual-Computing-Lab/openpose OpenPose GitHub] || MIT
|-
| 4 || MoveNet || Google Research || Detecting 17 critical key points of the human body || [https://github.com/tensorflow/tfjs-models/tree/master/posenet MoveNet GitHub] || Apache 2.0
|}
32f8c5992ea2abb52ad71689a859616f9622782c
1474
1473
2024-06-07T04:52:39Z
Vrtnis
21
/* Pose Estimation Related Models */
wikitext
text/x-wiki
Pose estimation is a computer vision technique that predicts the configuration of a person's or object's joints or parts in an image or video.
It involves detecting and tracking the position and orientation of these parts, usually represented as keypoints.
Pose estimation is widely used in applications such as motion capture, human-computer interaction, augmented reality, and robotics. The process typically involves training machine learning models on large datasets of annotated images to accurately identify and locate the keypoints.
=== Pose Estimation Related Models ===
{| class="wikitable sortable"
|-
! Sr No !! Model !! Developer !! Key Points !! Source !! License
|-
| 1 || MediaPipe || Google || Tracking 33 key points on the human body, offering cross-platform, customizable ML solutions || [https://github.com/google/mediapipe MediaPipe GitHub] || Apache 2.0
|-
| 2 || Detectron2 || Facebook AI Research || High-performance codebase for object detection and segmentation, including pose estimation || [https://github.com/facebookresearch/detectron2 Detectron2 GitHub] || Apache 2.0
|-
| 3 || OpenPose || Carnegie Mellon University || Detecting key points of the human body, including hand, facial, and foot || [https://github.com/CMU-Perceptual-Computing-Lab/openpose OpenPose GitHub] || MIT
|-
| 4 || MoveNet || Google Research || Detecting 17 critical key points of the human body || [https://github.com/tensorflow/tfjs-models/tree/master/posenet MoveNet GitHub] || Apache 2.0
|}
91fccfeca84b99b28b80a2be8dd37f814d726180
1475
1474
2024-06-07T05:01:52Z
Vrtnis
21
/* Pose Estimation Related Models */
wikitext
text/x-wiki
Pose estimation is a computer vision technique that predicts the configuration of a person's or object's joints or parts in an image or video.
It involves detecting and tracking the position and orientation of these parts, usually represented as keypoints.
Pose estimation is widely used in applications such as motion capture, human-computer interaction, augmented reality, and robotics. The process typically involves training machine learning models on large datasets of annotated images to accurately identify and locate the keypoints.
=== Pose Estimation Related Models ===
{| class="wikitable sortable"
|-
! Sr No !! Model !! Developer !! Key Points !! Source !! License
|-
| 1 || MediaPipe || Google || Tracking 33 key points on the human body, offering cross-platform, customizable ML solutions || [https://github.com/google/mediapipe MediaPipe GitHub] || Apache 2.0
|-
| 2 || Detectron2 || Facebook AI Research || High-performance codebase for object detection and segmentation, including pose estimation || [https://github.com/facebookresearch/detectron2 Detectron2 GitHub] || Apache 2.0
|-
| 3 || OpenPose || Carnegie Mellon University || Detecting key points of the human body, including hand, facial, and foot || [https://github.com/CMU-Perceptual-Computing-Lab/openpose OpenPose GitHub] || MIT
|-
| 4 || MoveNet || Google Research || Detecting 17 critical key points of the human body || [https://github.com/tensorflow/tfjs-models/tree/master/posenet MoveNet GitHub] || Apache 2.0
|-
| 5 || PoseNet || Google Research || Detecting different body parts, providing comprehensive skeletal information || [https://github.com/tensorflow/tfjs-models/tree/master/posenet PoseNet GitHub] || Apache 2.0
|-
| 6 || DCPose || Deep Dual Consecutive Network || Detecting human pose from multiple frames, addressing motion blur and occlusions || [https://github.com/DeepDualConsecutivePose/dcpose DCPose GitHub] || MIT
|-
| 7 || DensePose || Facebook AI Research || Mapping human-based pixels from an RGB image to the 3D surface of a human body || [https://github.com/facebookresearch/DensePose DensePose GitHub] || Apache 2.0
|-
| 8 || HigherHRNet || HRNet || Addressing scaling differences in pose prediction, especially for shorter people || [https://github.com/HRNet/HigherHRNet HigherHRNet GitHub] || MIT
|-
| 9 || Lightweight OpenPose || Daniil-Osokin || Real-time inference with minimal accuracy drop, detecting human poses through key points || [https://github.com/Daniil-Osokin/lightweight-human-pose-estimation Lightweight OpenPose GitHub] || MIT
|-
| 10 || AlphaPose || MVIG-SJTU || Detecting multiple individuals in various scenes, achieving high mAP on COCO and MPII datasets || [https://github.com/MVIG-SJTU/AlphaPose AlphaPose GitHub] || MIT
|}
e6cdecf2ea318650d2baf3648aa2a6cb5279cfe5
1477
1475
2024-06-07T05:38:08Z
Vrtnis
21
/*Add mediapipe detection*/
wikitext
text/x-wiki
Pose estimation is a computer vision technique that predicts the configuration of a person's or object's joints or parts in an image or video.
It involves detecting and tracking the position and orientation of these parts, usually represented as keypoints.
Pose estimation is widely used in applications such as motion capture, human-computer interaction, augmented reality, and robotics. The process typically involves training machine learning models on large datasets of annotated images to accurately identify and locate the keypoints.
=== Pose Estimation Related Models ===
{| class="wikitable sortable"
|-
! Sr No !! Model !! Developer !! Key Points !! Source !! License
|-
| 1 || MediaPipe || Google || Tracking 33 key points on the human body, offering cross-platform, customizable ML solutions || [https://github.com/google/mediapipe MediaPipe GitHub] || Apache 2.0
|-
| 2 || Detectron2 || Facebook AI Research || High-performance codebase for object detection and segmentation, including pose estimation || [https://github.com/facebookresearch/detectron2 Detectron2 GitHub] || Apache 2.0
|-
| 3 || OpenPose || Carnegie Mellon University || Detecting key points of the human body, including hand, facial, and foot || [https://github.com/CMU-Perceptual-Computing-Lab/openpose OpenPose GitHub] || MIT
|-
| 4 || MoveNet || Google Research || Detecting 17 critical key points of the human body || [https://github.com/tensorflow/tfjs-models/tree/master/posenet MoveNet GitHub] || Apache 2.0
|-
| 5 || PoseNet || Google Research || Detecting different body parts, providing comprehensive skeletal information || [https://github.com/tensorflow/tfjs-models/tree/master/posenet PoseNet GitHub] || Apache 2.0
|-
| 6 || DCPose || Deep Dual Consecutive Network || Detecting human pose from multiple frames, addressing motion blur and occlusions || [https://github.com/DeepDualConsecutivePose/dcpose DCPose GitHub] || MIT
|-
| 7 || DensePose || Facebook AI Research || Mapping human-based pixels from an RGB image to the 3D surface of a human body || [https://github.com/facebookresearch/DensePose DensePose GitHub] || Apache 2.0
|-
| 8 || HigherHRNet || HRNet || Addressing scaling differences in pose prediction, especially for shorter people || [https://github.com/HRNet/HigherHRNet HigherHRNet GitHub] || MIT
|-
| 9 || Lightweight OpenPose || Daniil-Osokin || Real-time inference with minimal accuracy drop, detecting human poses through key points || [https://github.com/Daniil-Osokin/lightweight-human-pose-estimation Lightweight OpenPose GitHub] || MIT
|-
| 10 || AlphaPose || MVIG-SJTU || Detecting multiple individuals in various scenes, achieving high mAP on COCO and MPII datasets || [https://github.com/MVIG-SJTU/AlphaPose AlphaPose GitHub] || MIT
|}
[[File:Pose detection overlay.gif|720px|thumb|Mediapipe Pose Detection]]
6330014352f7b417d0d3787511a7253a23dbb2e9
1479
1477
2024-06-07T05:43:53Z
Vrtnis
21
wikitext
text/x-wiki
Pose estimation is a computer vision technique that predicts the configuration of a person's or object's joints or parts in an image or video.
It involves detecting and tracking the position and orientation of these parts, usually represented as keypoints.
Pose estimation is widely used in applications such as motion capture, human-computer interaction, augmented reality, and robotics. The process typically involves training machine learning models on large datasets of annotated images to accurately identify and locate the keypoints.
=== Pose Estimation Related Models ===
{| class="wikitable sortable"
|-
! Sr No !! Model !! Developer !! Key Points !! Source !! License
|-
| 1 || MediaPipe || Google || Tracking 33 key points on the human body, offering cross-platform, customizable ML solutions || [https://github.com/google/mediapipe MediaPipe GitHub] || Apache 2.0
|-
| 2 || Detectron2 || Facebook AI Research || High-performance codebase for object detection and segmentation, including pose estimation || [https://github.com/facebookresearch/detectron2 Detectron2 GitHub] || Apache 2.0
|-
| 3 || OpenPose || Carnegie Mellon University || Detecting key points of the human body, including hand, facial, and foot || [https://github.com/CMU-Perceptual-Computing-Lab/openpose OpenPose GitHub] || MIT
|-
| 4 || MoveNet || Google Research || Detecting 17 critical key points of the human body || [https://github.com/tensorflow/tfjs-models/tree/master/posenet MoveNet GitHub] || Apache 2.0
|-
| 5 || PoseNet || Google Research || Detecting different body parts, providing comprehensive skeletal information || [https://github.com/tensorflow/tfjs-models/tree/master/posenet PoseNet GitHub] || Apache 2.0
|-
| 6 || DCPose || Deep Dual Consecutive Network || Detecting human pose from multiple frames, addressing motion blur and occlusions || [https://github.com/DeepDualConsecutivePose/dcpose DCPose GitHub] || MIT
|-
| 7 || DensePose || Facebook AI Research || Mapping human-based pixels from an RGB image to the 3D surface of a human body || [https://github.com/facebookresearch/DensePose DensePose GitHub] || Apache 2.0
|-
| 8 || HigherHRNet || HRNet || Addressing scaling differences in pose prediction, especially for shorter people || [https://github.com/HRNet/HigherHRNet HigherHRNet GitHub] || MIT
|-
| 9 || Lightweight OpenPose || Daniil-Osokin || Real-time inference with minimal accuracy drop, detecting human poses through key points || [https://github.com/Daniil-Osokin/lightweight-human-pose-estimation Lightweight OpenPose GitHub] || MIT
|-
| 10 || AlphaPose || MVIG-SJTU || Detecting multiple individuals in various scenes, achieving high mAP on COCO and MPII datasets || [https://github.com/MVIG-SJTU/AlphaPose AlphaPose GitHub] || MIT
|}
[[File:Pose detection overlay.gif|720px|thumb|Mediapipe Pose Detection]]
[[File:Poseoutput.gif|720px|thumb|Mediapipe Pose Detection]]
a4116bb6c11e770ae1357e414fd43cfffda37fdc
1480
1479
2024-06-07T05:44:28Z
Vrtnis
21
wikitext
text/x-wiki
Pose estimation is a computer vision technique that predicts the configuration of a person's or object's joints or parts in an image or video.
It involves detecting and tracking the position and orientation of these parts, usually represented as keypoints.
Pose estimation is widely used in applications such as motion capture, human-computer interaction, augmented reality, and robotics. The process typically involves training machine learning models on large datasets of annotated images to accurately identify and locate the keypoints.
=== Pose Estimation Related Models ===
{| class="wikitable sortable"
|-
! Sr No !! Model !! Developer !! Key Points !! Source !! License
|-
| 1 || MediaPipe || Google || Tracking 33 key points on the human body, offering cross-platform, customizable ML solutions || [https://github.com/google/mediapipe MediaPipe GitHub] || Apache 2.0
|-
| 2 || Detectron2 || Facebook AI Research || High-performance codebase for object detection and segmentation, including pose estimation || [https://github.com/facebookresearch/detectron2 Detectron2 GitHub] || Apache 2.0
|-
| 3 || OpenPose || Carnegie Mellon University || Detecting key points of the human body, including hand, facial, and foot || [https://github.com/CMU-Perceptual-Computing-Lab/openpose OpenPose GitHub] || MIT
|-
| 4 || MoveNet || Google Research || Detecting 17 critical key points of the human body || [https://github.com/tensorflow/tfjs-models/tree/master/posenet MoveNet GitHub] || Apache 2.0
|-
| 5 || PoseNet || Google Research || Detecting different body parts, providing comprehensive skeletal information || [https://github.com/tensorflow/tfjs-models/tree/master/posenet PoseNet GitHub] || Apache 2.0
|-
| 6 || DCPose || Deep Dual Consecutive Network || Detecting human pose from multiple frames, addressing motion blur and occlusions || [https://github.com/DeepDualConsecutivePose/dcpose DCPose GitHub] || MIT
|-
| 7 || DensePose || Facebook AI Research || Mapping human-based pixels from an RGB image to the 3D surface of a human body || [https://github.com/facebookresearch/DensePose DensePose GitHub] || Apache 2.0
|-
| 8 || HigherHRNet || HRNet || Addressing scaling differences in pose prediction, especially for shorter people || [https://github.com/HRNet/HigherHRNet HigherHRNet GitHub] || MIT
|-
| 9 || Lightweight OpenPose || Daniil-Osokin || Real-time inference with minimal accuracy drop, detecting human poses through key points || [https://github.com/Daniil-Osokin/lightweight-human-pose-estimation Lightweight OpenPose GitHub] || MIT
|-
| 10 || AlphaPose || MVIG-SJTU || Detecting multiple individuals in various scenes, achieving high mAP on COCO and MPII datasets || [https://github.com/MVIG-SJTU/AlphaPose AlphaPose GitHub] || MIT
|}
[[File:Pose detection overlay.gif|720px|thumb|Mediapipe Pose Detection]]
6330014352f7b417d0d3787511a7253a23dbb2e9
1485
1480
2024-06-07T05:50:50Z
Vrtnis
21
wikitext
text/x-wiki
Pose estimation is a computer vision technique that predicts the configuration of a person's or object's joints or parts in an image or video.
It involves detecting and tracking the position and orientation of these parts, usually represented as keypoints.
Pose estimation is widely used in applications such as motion capture, human-computer interaction, augmented reality, and robotics. The process typically involves training machine learning models on large datasets of annotated images to accurately identify and locate the keypoints.
=== Pose Estimation Related Models ===
{| class="wikitable sortable"
|-
! Sr No !! Model !! Developer !! Key Points !! Source !! License
|-
| 1 || MediaPipe || Google || Tracking 33 key points on the human body, offering cross-platform, customizable ML solutions || [https://github.com/google/mediapipe MediaPipe GitHub] || Apache 2.0
|-
| 2 || Detectron2 || Facebook AI Research || High-performance codebase for object detection and segmentation, including pose estimation || [https://github.com/facebookresearch/detectron2 Detectron2 GitHub] || Apache 2.0
|-
| 3 || OpenPose || Carnegie Mellon University || Detecting key points of the human body, including hand, facial, and foot || [https://github.com/CMU-Perceptual-Computing-Lab/openpose OpenPose GitHub] || MIT
|-
| 4 || MoveNet || Google Research || Detecting 17 critical key points of the human body || [https://github.com/tensorflow/tfjs-models/tree/master/posenet MoveNet GitHub] || Apache 2.0
|-
| 5 || PoseNet || Google Research || Detecting different body parts, providing comprehensive skeletal information || [https://github.com/tensorflow/tfjs-models/tree/master/posenet PoseNet GitHub] || Apache 2.0
|-
| 6 || DCPose || Deep Dual Consecutive Network || Detecting human pose from multiple frames, addressing motion blur and occlusions || [https://github.com/DeepDualConsecutivePose/dcpose DCPose GitHub] || MIT
|-
| 7 || DensePose || Facebook AI Research || Mapping human-based pixels from an RGB image to the 3D surface of a human body || [https://github.com/facebookresearch/DensePose DensePose GitHub] || Apache 2.0
|-
| 8 || HigherHRNet || HRNet || Addressing scaling differences in pose prediction, especially for shorter people || [https://github.com/HRNet/HigherHRNet HigherHRNet GitHub] || MIT
|-
| 9 || Lightweight OpenPose || Daniil-Osokin || Real-time inference with minimal accuracy drop, detecting human poses through key points || [https://github.com/Daniil-Osokin/lightweight-human-pose-estimation Lightweight OpenPose GitHub] || MIT
|-
| 10 || AlphaPose || MVIG-SJTU || Detecting multiple individuals in various scenes, achieving high mAP on COCO and MPII datasets || [https://github.com/MVIG-SJTU/AlphaPose AlphaPose GitHub] || MIT
|}
[[File:Pose detection overlay.gif|720px|thumb|Mediapipe Pose Detection]]
<gallery>
Pose_example1.png|About to Stand
Pose_example2.png|Standing but error in leg detection
</gallery>
edf7a9df84d8340a37d96653323320eabde30a2f
1486
1485
2024-06-07T05:51:56Z
Vrtnis
21
wikitext
text/x-wiki
Pose estimation is a computer vision technique that predicts the configuration of a person's or object's joints or parts in an image or video.
It involves detecting and tracking the position and orientation of these parts, usually represented as keypoints.
Pose estimation is widely used in applications such as motion capture, human-computer interaction, augmented reality, and robotics. The process typically involves training machine learning models on large datasets of annotated images to accurately identify and locate the keypoints.
=== Pose Estimation Related Models ===
{| class="wikitable sortable"
|-
! Sr No !! Model !! Developer !! Key Points !! Source !! License
|-
| 1 || MediaPipe || Google || Tracking 33 key points on the human body, offering cross-platform, customizable ML solutions || [https://github.com/google/mediapipe MediaPipe GitHub] || Apache 2.0
|-
| 2 || Detectron2 || Facebook AI Research || High-performance codebase for object detection and segmentation, including pose estimation || [https://github.com/facebookresearch/detectron2 Detectron2 GitHub] || Apache 2.0
|-
| 3 || OpenPose || Carnegie Mellon University || Detecting key points of the human body, including hand, facial, and foot || [https://github.com/CMU-Perceptual-Computing-Lab/openpose OpenPose GitHub] || MIT
|-
| 4 || MoveNet || Google Research || Detecting 17 critical key points of the human body || [https://github.com/tensorflow/tfjs-models/tree/master/posenet MoveNet GitHub] || Apache 2.0
|-
| 5 || PoseNet || Google Research || Detecting different body parts, providing comprehensive skeletal information || [https://github.com/tensorflow/tfjs-models/tree/master/posenet PoseNet GitHub] || Apache 2.0
|-
| 6 || DCPose || Deep Dual Consecutive Network || Detecting human pose from multiple frames, addressing motion blur and occlusions || [https://github.com/DeepDualConsecutivePose/dcpose DCPose GitHub] || MIT
|-
| 7 || DensePose || Facebook AI Research || Mapping human-based pixels from an RGB image to the 3D surface of a human body || [https://github.com/facebookresearch/DensePose DensePose GitHub] || Apache 2.0
|-
| 8 || HigherHRNet || HRNet || Addressing scaling differences in pose prediction, especially for shorter people || [https://github.com/HRNet/HigherHRNet HigherHRNet GitHub] || MIT
|-
| 9 || Lightweight OpenPose || Daniil-Osokin || Real-time inference with minimal accuracy drop, detecting human poses through key points || [https://github.com/Daniil-Osokin/lightweight-human-pose-estimation Lightweight OpenPose GitHub] || MIT
|-
| 10 || AlphaPose || MVIG-SJTU || Detecting multiple individuals in various scenes, achieving high mAP on COCO and MPII datasets || [https://github.com/MVIG-SJTU/AlphaPose AlphaPose GitHub] || MIT
|}
[[File:Pose detection overlay.gif|720px|thumb|Mediapipe Pose Detection]]
<gallery>
Pose_example1.png|About to Stand
Pose_example2.png|Standing but error in leg detection
Pose_example3.png|Foreground missed
Pose_example4.png|Hoodie
</gallery>
71e3aa93c50fbb326f1f40b468f543e7496ee7bd
1487
1486
2024-06-07T05:53:04Z
Vrtnis
21
wikitext
text/x-wiki
Pose estimation is a computer vision technique that predicts the configuration of a person's or object's joints or parts in an image or video.
It involves detecting and tracking the position and orientation of these parts, usually represented as keypoints.
Pose estimation is widely used in applications such as motion capture, human-computer interaction, augmented reality, and robotics. The process typically involves training machine learning models on large datasets of annotated images to accurately identify and locate the keypoints.
=== Pose Estimation Related Models ===
{| class="wikitable sortable"
|-
! Sr No !! Model !! Developer !! Key Points !! Source !! License
|-
| 1 || MediaPipe || Google || Tracking 33 key points on the human body, offering cross-platform, customizable ML solutions || [https://github.com/google/mediapipe MediaPipe GitHub] || Apache 2.0
|-
| 2 || Detectron2 || Facebook AI Research || High-performance codebase for object detection and segmentation, including pose estimation || [https://github.com/facebookresearch/detectron2 Detectron2 GitHub] || Apache 2.0
|-
| 3 || OpenPose || Carnegie Mellon University || Detecting key points of the human body, including hand, facial, and foot || [https://github.com/CMU-Perceptual-Computing-Lab/openpose OpenPose GitHub] || MIT
|-
| 4 || MoveNet || Google Research || Detecting 17 critical key points of the human body || [https://github.com/tensorflow/tfjs-models/tree/master/posenet MoveNet GitHub] || Apache 2.0
|-
| 5 || PoseNet || Google Research || Detecting different body parts, providing comprehensive skeletal information || [https://github.com/tensorflow/tfjs-models/tree/master/posenet PoseNet GitHub] || Apache 2.0
|-
| 6 || DCPose || Deep Dual Consecutive Network || Detecting human pose from multiple frames, addressing motion blur and occlusions || [https://github.com/DeepDualConsecutivePose/dcpose DCPose GitHub] || MIT
|-
| 7 || DensePose || Facebook AI Research || Mapping human-based pixels from an RGB image to the 3D surface of a human body || [https://github.com/facebookresearch/DensePose DensePose GitHub] || Apache 2.0
|-
| 8 || HigherHRNet || HRNet || Addressing scaling differences in pose prediction, especially for shorter people || [https://github.com/HRNet/HigherHRNet HigherHRNet GitHub] || MIT
|-
| 9 || Lightweight OpenPose || Daniil-Osokin || Real-time inference with minimal accuracy drop, detecting human poses through key points || [https://github.com/Daniil-Osokin/lightweight-human-pose-estimation Lightweight OpenPose GitHub] || MIT
|-
| 10 || AlphaPose || MVIG-SJTU || Detecting multiple individuals in various scenes, achieving high mAP on COCO and MPII datasets || [https://github.com/MVIG-SJTU/AlphaPose AlphaPose GitHub] || MIT
|}
[[File:Pose detection overlay.gif|720px|thumb|Mediapipe Pose Detection]]
<gallery>
Pose_example1.png|About to Stand
Pose_example2.png|Standing but error in leg detection
Pose_example3.png|Foreground missed
Pose_example4.png|Hoodie
</gallery>
af233217b14dc3e4f2e644d7213965757c8e0a5f
1489
1487
2024-06-07T06:47:54Z
Vrtnis
21
wikitext
text/x-wiki
Pose estimation is a computer vision technique that predicts the configuration of a person's or object's joints or parts in an image or video.
It involves detecting and tracking the position and orientation of these parts, usually represented as keypoints.
Pose estimation is widely used in applications such as motion capture, human-computer interaction, augmented reality, and robotics. The process typically involves training machine learning models on large datasets of annotated images to accurately identify and locate the keypoints.
=== Pose Estimation Related Models ===
{| class="wikitable sortable"
|-
! Sr No !! Model !! Developer !! Key Points !! Source !! License
|-
| 1 || MediaPipe || Google || Tracking 33 key points on the human body, offering cross-platform, customizable ML solutions || [https://github.com/google/mediapipe MediaPipe GitHub] || Apache 2.0
|-
| 2 || Detectron2 || Facebook AI Research || High-performance codebase for object detection and segmentation, including pose estimation || [https://github.com/facebookresearch/detectron2 Detectron2 GitHub] || Apache 2.0
|-
| 3 || OpenPose || Carnegie Mellon University || Detecting key points of the human body, including hand, facial, and foot || [https://github.com/CMU-Perceptual-Computing-Lab/openpose OpenPose GitHub] || MIT
|-
| 4 || MoveNet || Google Research || Detecting 17 critical key points of the human body || [https://github.com/tensorflow/tfjs-models/tree/master/posenet MoveNet GitHub] || Apache 2.0
|-
| 5 || PoseNet || Google Research || Detecting different body parts, providing comprehensive skeletal information || [https://github.com/tensorflow/tfjs-models/tree/master/posenet PoseNet GitHub] || Apache 2.0
|-
| 6 || DCPose || Deep Dual Consecutive Network || Detecting human pose from multiple frames, addressing motion blur and occlusions || [https://github.com/DeepDualConsecutivePose/dcpose DCPose GitHub] || MIT
|-
| 7 || DensePose || Facebook AI Research || Mapping human-based pixels from an RGB image to the 3D surface of a human body || [https://github.com/facebookresearch/DensePose DensePose GitHub] || Apache 2.0
|-
| 8 || HigherHRNet || HRNet || Addressing scaling differences in pose prediction, especially for shorter people || [https://github.com/HRNet/HigherHRNet HigherHRNet GitHub] || MIT
|-
| 9 || Lightweight OpenPose || Daniil-Osokin || Real-time inference with minimal accuracy drop, detecting human poses through key points || [https://github.com/Daniil-Osokin/lightweight-human-pose-estimation Lightweight OpenPose GitHub] || MIT
|-
| 10 || AlphaPose || MVIG-SJTU || Detecting multiple individuals in various scenes, achieving high mAP on COCO and MPII datasets || [https://github.com/MVIG-SJTU/AlphaPose AlphaPose GitHub] || MIT
|}
[[File:Pose detection overlay.gif|720px|thumb|Mediapipe Pose Detection]]
<gallery>
Pose_example1.png|About to Stand
Pose_example2.png|Standing but error in leg detection
Pose_example3.png|Foreground missed
Pose_example4.png|Hoodie
</gallery>
[[File:Poseoutput white orig.gif|720px|thumb|Mediapipe Pose Detection]]
776a81ffcec351c1022ca92de74dd2ba5130e608
File:Pose detection overlay.gif
6
320
1476
2024-06-07T05:37:13Z
Vrtnis
21
wikitext
text/x-wiki
Pose detection overlay
37adfb1a1a961e6f65f8d983c6cb774f8a2c6a16
File:Poseoutput.gif
6
321
1478
2024-06-07T05:40:43Z
Vrtnis
21
wikitext
text/x-wiki
Mediapipe pose detection without background
7d07ad192169da0767128c6cb33488f1b06b1c00
File:Pose example1.png
6
322
1481
2024-06-07T05:45:49Z
Vrtnis
21
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:Pose example2.png
6
323
1482
2024-06-07T05:46:23Z
Vrtnis
21
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:Pose example3.png
6
324
1483
2024-06-07T05:47:17Z
Vrtnis
21
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:Pose example4.png
6
325
1484
2024-06-07T05:47:48Z
Vrtnis
21
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
File:Poseoutput white orig.gif
6
326
1488
2024-06-07T06:46:37Z
Vrtnis
21
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
Main Page
0
1
1490
1453
2024-06-09T19:18:47Z
Robotgirlfriend
34
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.youtube.com/watch?v=LiNgr1tz49I&list=PLZnJoM76RM6ItAfZIxJYNKdaR_BobleLY Advanced Robot Dynamics]
| High-quality open-source course from CMU
|-
| [https://www.youtube.com/watch?v=6rUdAOCNXAU&list=PLZnJoM76RM6KugDT9sw5zhAmqKnGeoLRa Optimal Control]
| High-quality open-source course from CMU
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
| Textbook for graduate-level courses in robotics
|-
| [[Learning algorithms]]
| Resources related with training humanoid models in simulation and real environments
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Courses]]
| Category for pages about useful courses related to robotics
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Firmware]]
| Category for pages relating to firmware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Papers]]
| Category for humanoid robotics papers
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== List of Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]]
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[Engineered Arts]]
| [[Ameca]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Haier]]
| [[Kuavo (Kuafu)]]
|-
| [[Honda Robotics]]
| [[ASIMO]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Instituto Italiano]]
| [[iCub]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[LASER Robotics]]
| [[HECTOR V2]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[NASA]]
| [[Valkyrie]], [[Robonaut2]]
|-
| [[NEURA Robotics]]
| [[4NE-1]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[Pollen Robotics]]
| [[Reachy]]
|-
| [[Proxy]]
|
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[SoftBank Robotics]]
| [[Pepper]], [[NAO]]
|-
| [[Stanford Robotics Lab]]
| [[OceanOneK]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[SUPCON]]
| [[Navigator α]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[UC Berkeley]]
| [[Berkeley Blue]]
|-
| [[Unitree]]
| [[H1]], [[G1]]
|-
| [[University of Tehran]]
| [[Surena IV]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[WorkFar]]
| [[WorkFar Syntro]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Xpeng]]
| [[PX5]]
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== List of Actuators ===
{| class="wikitable"
|-
! Actuator
! Notes
|-
| [[OBot]]
| Open-source actuator
|-
| [[SPIN Servo]]
| Open-source actuator
|-
| [[VESCular6]]
| A project based on [[VESC]]
|-
| [[ODrive]]
| A precision motor controller
|-
| [[Solo Motor Controller]]
| A motor controller alternative to the [[ODrive]].
|-
| [[J60]]
| Actuators built for the [[DEEP Robotics]] quadrupeds.
|-
| [[K-Scale Motor Controller]]
| An open-source motor controller
|}
193d468f3317bb0706f3d91fbdc3ba7f124dd11c
1505
1490
2024-06-11T17:48:44Z
Ben
2
/* List of Robots */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.youtube.com/watch?v=LiNgr1tz49I&list=PLZnJoM76RM6ItAfZIxJYNKdaR_BobleLY Advanced Robot Dynamics]
| High-quality open-source course from CMU
|-
| [https://www.youtube.com/watch?v=6rUdAOCNXAU&list=PLZnJoM76RM6KugDT9sw5zhAmqKnGeoLRa Optimal Control]
| High-quality open-source course from CMU
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
| Textbook for graduate-level courses in robotics
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Courses]]
| Category for pages about useful courses related to robotics
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Firmware]]
| Category for pages relating to firmware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Papers]]
| Category for humanoid robotics papers
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== List of Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]], [[Alex]]
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[Engineered Arts]]
| [[Ameca]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Haier]]
| [[Kuavo (Kuafu)]]
|-
| [[Honda Robotics]]
| [[ASIMO]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Instituto Italiano]]
| [[iCub]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[LASER Robotics]]
| [[HECTOR V2]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[NASA]]
| [[Valkyrie]], [[Robonaut2]]
|-
| [[NEURA Robotics]]
| [[4NE-1]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[Pollen Robotics]]
| [[Reachy]]
|-
| [[Proxy]]
|
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[SoftBank Robotics]]
| [[Pepper]], [[NAO]]
|-
| [[Stanford Robotics Lab]]
| [[OceanOneK]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[SUPCON]]
| [[Navigator α]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[UC Berkeley]]
| [[Berkeley Blue]]
|-
| [[Unitree]]
| [[H1]], [[G1]]
|-
| [[University of Tehran]]
| [[Surena IV]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[WorkFar]]
| [[WorkFar Syntro]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Xpeng]]
| [[PX5]]
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== List of Actuators ===
{| class="wikitable"
|-
! Actuator
! Notes
|-
| [[OBot]]
| Open-source actuator
|-
| [[SPIN Servo]]
| Open-source actuator
|-
| [[VESCular6]]
| A project based on [[VESC]]
|-
| [[ODrive]]
| A precision motor controller
|-
| [[Solo Motor Controller]]
| A motor controller alternative to the [[ODrive]].
|-
| [[J60]]
| Actuators built for the [[DEEP Robotics]] quadrupeds.
|-
| [[K-Scale Motor Controller]]
| An open-source motor controller
|}
cd28d52cad1a3ffede262f87922bdde0746e3370
K-Scale Weekly Progress Updates
0
294
1491
1400
2024-06-10T03:17:10Z
Ben
2
wikitext
text/x-wiki
{| class="wikitable"
|-
! Link
|-
| [https://x.com/kscalelabs/status/1788968705378181145 2024.05.10]
|-
| [https://x.com/kscalelabs/status/1791507358780461496 2024.05.17]
|-
| [https://x.com/kscalelabs/status/1794109131214712914 2024.05.24]
|-
| [https://x.com/kscalelabs/status/1796617681455775944 2024.05.31]
|-
| [https://x.com/kscalelabs/status/1799197382208590132 2024.06.07]
|}
[[Category:K-Scale]]
fa615b4e6a20512bd2275902649cc15b688ee480
1492
1491
2024-06-10T03:17:24Z
Ben
2
wikitext
text/x-wiki
{| class="wikitable"
|-
! Link
|-
| [https://x.com/kscalelabs/status/1799197382208590132 2024.06.07]
|-
| [https://x.com/kscalelabs/status/1796617681455775944 2024.05.31]
|-
| [https://x.com/kscalelabs/status/1794109131214712914 2024.05.24]
|-
| [https://x.com/kscalelabs/status/1791507358780461496 2024.05.17]
|-
| [https://x.com/kscalelabs/status/1788968705378181145 2024.05.10]
|}
[[Category:K-Scale]]
ee87016d8c007ac2f10ff2d79e1ea3afcedc0eca
Robotics For Beginners
0
327
1493
2024-06-10T13:25:43Z
Futurefunk
33
Created page with "=== '''This is a learning guide for absolute beginners who want to build a humanoid walking robot from scratch.''' === ==== '''This assumes the following:''' ==== # You want..."
wikitext
text/x-wiki
=== '''This is a learning guide for absolute beginners who want to build a humanoid walking robot from scratch.''' ===
==== '''This assumes the following:''' ====
# You want to build something substantial, like a walking robot, but have no prior experience.
# You don't know the basic terminology or the parts a humanoid robot is made of, e.g. actuators, servos, etc.
# You don't know the basic software and algorithms that train a humanoid robot to move.
# You don't know how to start building a walking robot with the available open-source resources.
# You want a step-by-step guide of how to build this walking robot.
==== '''FAQs before starting:''' ====
# Do I need to know math, programming and machine learning concepts to build this humanoid robot?
# How much do I need to know before starting?
# What is your teaching approach?
==== '''Basic terminology and why they are important:''' ====
# Actuators
# Difference between underactuated and overactuated
# Servos
'''This is clearly incomplete and a work in progress; you can help by expanding it!'''
6357ddd2178328542bfbdb647118f8e5cacf59f5
1500
1493
2024-06-11T16:04:47Z
Futurefunk
33
/* FAQs before starting: */
wikitext
text/x-wiki
=== '''This is a learning guide for absolute beginners who want to build a humanoid walking robot from scratch.''' ===
==== '''This assumes the following:''' ====
# You want to build something substantial, like a walking robot, but have no prior experience.
# You don't know the basic terminology or the parts a humanoid robot is made of, e.g. actuators, servos, etc.
# You don't know the basic software and algorithms that train a humanoid robot to move.
# You don't know how to start building a walking robot with the available open-source resources.
# You want a step-by-step guide of how to build this walking robot.
==== '''FAQs before starting:''' ====
# Do I need to know math, programming and machine learning concepts to build this humanoid robot?
Yes, you do. We will point out the concepts to learn and open-source courses to follow. Learning these concepts from scratch requires patience and practice.
# Is knowledge of 3D printing required?
Most probably, unless we can find off-the-shelf parts. If we can, they will be documented here. If not, we will document the 3D printing process.
# What is your teaching approach?
We define the goal, which is to build a walking humanoid robot, then work backward to identify all the learning material needed to make it a reality. This documentation aims to provide every detail required to design, assemble, and train a walking humanoid robot, and we will skip technical material that does not contribute to that goal.
==== '''Basic terminology and why they are important:''' ====
# Actuators
# Difference between underactuated and overactuated
# Servos
'''This is clearly incomplete and a work in progress; you can help by expanding it!'''
0756c81bae6e9864cf941406189d2f78cd098024
1501
1500
2024-06-11T16:08:25Z
Futurefunk
33
/* FAQs before starting: */
wikitext
text/x-wiki
=== '''This is a learning guide for absolute beginners who want to build a humanoid walking robot from scratch.''' ===
==== '''This assumes the following:''' ====
# You want to build something substantial, like a walking robot, but have no prior experience.
# You don't know the basic terminology or the parts a humanoid robot is made of, e.g. actuators, servos, etc.
# You don't know the basic software and algorithms that train a humanoid robot to move.
# You don't know how to start building a walking robot with the available open-source resources.
# You want a step-by-step guide of how to build this walking robot.
==== '''FAQs before starting:''' ====
;Do I need to know math, programming and machine learning concepts to build this humanoid robot?:Yes, you do. We will point out the concepts to learn and open-source courses to follow. Learning these concepts from scratch requires patience and practice. We will also point out technical material that can safely be skipped.
# Is knowledge of 3D printing required?
Most probably, unless we can find off-the-shelf parts. If we can, they will be documented here. If not, we will document the 3D printing process.
# What is your teaching approach?
We define the goal, which is to build a walking humanoid robot, then work backward to identify all the learning material needed to make it a reality. This documentation aims to provide every detail required to design, assemble, and train a walking humanoid robot, and we will skip technical material that does not contribute to that goal.
==== '''Basic terminology and why they are important:''' ====
# Actuators
# Difference between underactuated and overactuated
# Servos
'''This is clearly incomplete and a work in progress; you can help by expanding it!'''
c26265e26dbec7c555e5061a9cbd3b978b8d5abd
1502
1501
2024-06-11T16:08:49Z
Futurefunk
33
/* FAQs before starting: */
wikitext
text/x-wiki
=== '''This is a learning guide for absolute beginners who want to build a humanoid walking robot from scratch.''' ===
==== '''This assumes the following:''' ====
# You want to build something substantial, like a walking robot, but have no prior experience.
# You don't know the basic terminology or the parts a humanoid robot is made of, e.g. actuators, servos, etc.
# You don't know the basic software and algorithms that train a humanoid robot to move.
# You don't know how to start building a walking robot with the available open-source resources.
# You want a step-by-step guide of how to build this walking robot.
==== '''FAQs before starting:''' ====
;Do I need to know math, programming and machine learning concepts to build this humanoid robot?:Yes, you do. We will point out the concepts to learn and open-source courses to follow. Learning these concepts from scratch requires patience and practice. We will also point out technical material that can safely be skipped.
;Is knowledge of 3D printing required?:Most probably, unless we can find off-the-shelf parts. If we can, they will be documented here. If not, we will document the 3D printing process.
;What is your teaching approach?:We define the goal, which is to build a walking humanoid robot, then work backward to identify all the learning material needed to make it a reality. This documentation aims to provide every detail required to design, assemble, and train a walking humanoid robot, and we will skip technical material that does not contribute to that goal.
==== '''Basic terminology and why they are important:''' ====
# Actuators
# Difference between underactuated and overactuated
# Servos
'''This is clearly incomplete and a work in progress; you can help by expanding it!'''
c3a8f49dc48ae49a006f63c3a8a039adf9edb187
1503
1502
2024-06-11T16:09:18Z
Futurefunk
33
/* FAQs before starting: */
wikitext
text/x-wiki
=== '''This is a learning guide for absolute beginners who want to build a humanoid walking robot from scratch.''' ===
==== '''This assumes the following:''' ====
# You want to build something substantial, like a walking robot, but have no prior experience.
# You don't know the basic terminology or the parts a humanoid robot is made of, e.g. actuators, servos, etc.
# You don't know the basic software and algorithms that train a humanoid robot to move.
# You don't know how to start building a walking robot with the available open-source resources.
# You want a step-by-step guide of how to build this walking robot.
==== '''FAQs before starting:''' ====
;'''Do I need to know math, programming and machine learning concepts to build this humanoid robot?''':Yes, you do. We will point out the concepts to learn and open-source courses to follow. Learning these concepts from scratch requires patience and practice. We will also point out technical material that can safely be skipped.
;'''Is knowledge of 3D printing required?''':Most probably, unless we can find off-the-shelf parts. If we can, they will be documented here. If not, we will document the 3D printing process.
;'''What is your teaching approach?''':We define the goal, which is to build a walking humanoid robot, then work backward to identify all the learning material needed to make it a reality. This documentation aims to provide every detail required to design, assemble, and train a walking humanoid robot, and we will skip technical material that does not contribute to that goal.
==== '''Basic terminology and why they are important:''' ====
# Actuators
# Difference between underactuated and overactuated
# Servos
'''This is clearly incomplete and a work in progress; you can help by expanding it!'''
0ec88279a05fcf8b19b2909351960a256b99c155
1504
1503
2024-06-11T16:14:24Z
Futurefunk
33
/* Basic terminology and why they are important: */
wikitext
text/x-wiki
=== '''This is a learning guide for absolute beginners who want to build a humanoid walking robot from scratch.''' ===
==== '''This assumes the following:''' ====
# You want to build something substantial, like a walking robot, but have no prior experience.
# You don't know the basic terminology or the parts a humanoid robot is made of, e.g. actuators, servos, etc.
# You don't know the basic software and algorithms that train a humanoid robot to move.
# You don't know how to start building a walking robot with the available open-source resources.
# You want a step-by-step guide of how to build this walking robot.
==== '''FAQs before starting:''' ====
;'''Do I need to know math, programming and machine learning concepts to build this humanoid robot?''':Yes, you do. We will point out the concepts to learn and open-source courses to follow. Learning these concepts from scratch requires patience and practice. We will also point out technical material that can safely be skipped.
;'''Is knowledge of 3D printing required?''':Most probably, unless we can find off-the-shelf parts. If we can, they will be documented here. If not, we will document the 3D printing process.
;'''What is your teaching approach?''':We define the goal, which is to build a walking humanoid robot, then work backward to identify all the learning material needed to make it a reality. This documentation aims to provide every detail required to design, assemble, and train a walking humanoid robot, and we will skip technical material that does not contribute to that goal.
==== '''Basic terminology and why they are important:''' ====
# Actuator
# Gearbox
# Difference between under-actuated and over-actuated
# Servomotor (Servo)
# Firmware
# Robot Operating System (ROS)
# Reinforcement Learning
'''This is clearly incomplete and a work in progress; you can help by expanding it!'''
67bc500d2b5c215151f3db345b027ad825dd8e36
K-Scale Lecture Circuit
0
299
1494
1445
2024-06-10T22:44:39Z
Ben
2
wikitext
text/x-wiki
{| class="wikitable"
|-
! Date
! Presenter
! Topic
|-
| TBD
| Timothy
|
|-
| 2024.06.14
| Allen
| Language models
|-
| 2024.06.13
| Matt
| CAD software principles
|-
| 2024.06.12
| Ryan
| 3-phase motors
|-
| 2024.06.11
| Vedant
| CAN Protocol
|-
| 2024.06.10
| Isaac
| Speech Papers Round 2
|-
| 2024.06.07
| Ben
| Quantization
|-
| 2024.06.06
| Tom
| Linux Raw
|-
| 2024.06.05
| Hugo
| Gaussian Splats
|-
| 2024.06.04
| Dennis
| Speech representation learning papers
|-
| 2024.06.03
| Paweł
| What I (want to) believe in
|-
| 2024.05.30
| Isaac
| VLMs
|-
| 2024.05.29
| Allen
| PPO
|}
[[Category: K-Scale]]
a2630f155bd8f95f029b29c47049217d70a41b54
1495
1494
2024-06-10T22:45:13Z
Ben
2
wikitext
text/x-wiki
{| class="wikitable"
|-
! Date
! Presenter
! Topic
|-
| TBD
| Timothy
|
|-
| 2024.06.14
| Allen
| [https://github.com/karpathy/llama2.c llama.c]
|-
| 2024.06.13
| Matt
| CAD software principles
|-
| 2024.06.12
| Ryan
| 3-phase motors
|-
| 2024.06.11
| Vedant
| CAN Protocol
|-
| 2024.06.10
| Isaac
| Speech Papers Round 2
|-
| 2024.06.07
| Ben
| Quantization
|-
| 2024.06.06
| Tom
| Linux Raw
|-
| 2024.06.05
| Hugo
| Gaussian Splats
|-
| 2024.06.04
| Dennis
| Speech representation learning papers
|-
| 2024.06.03
| Paweł
| What I (want to) believe in
|-
| 2024.05.30
| Isaac
| VLMs
|-
| 2024.05.29
| Allen
| PPO
|}
[[Category: K-Scale]]
d12bccc920a7b1bfaaf3a0567246871c4bca7d7c
1496
1495
2024-06-10T22:45:30Z
Ben
2
wikitext
text/x-wiki
{| class="wikitable"
|-
! Date
! Presenter
! Topic
|-
| TBD
| Timothy
|
|-
| 2024.06.14
| Allen
| [https://github.com/karpathy/llm.c llm.c]
|-
| 2024.06.13
| Matt
| CAD software principles
|-
| 2024.06.12
| Ryan
| 3-phase motors
|-
| 2024.06.11
| Vedant
| CAN Protocol
|-
| 2024.06.10
| Isaac
| Speech Papers Round 2
|-
| 2024.06.07
| Ben
| Quantization
|-
| 2024.06.06
| Tom
| Linux Raw
|-
| 2024.06.05
| Hugo
| Gaussian Splats
|-
| 2024.06.04
| Dennis
| Speech representation learning papers
|-
| 2024.06.03
| Paweł
| What I (want to) believe in
|-
| 2024.05.30
| Isaac
| VLMs
|-
| 2024.05.29
| Allen
| PPO
|}
[[Category: K-Scale]]
a1cffbe145f008ebaf835815c6afec9bd7678048
1497
1496
2024-06-10T22:48:48Z
Ben
2
wikitext
text/x-wiki
{| class="wikitable"
|-
! Date
! Presenter
! Topic
|-
| TBD
| Timothy
|
|-
| 2024.06.14
| Allen
| [https://github.com/karpathy/llm.c llm.c]
|-
| 2024.06.13
| Matt
| CAD software principles
|-
| 2024.06.12
| Ryan
| 3-phase motors
|-
| 2024.06.11
| Vedant
| CAN Protocol
|-
| 2024.06.10
| Isaac
| [https://humanoids.wiki/w/Dennis%27_Speech_Project Speech Papers Round 2]
|-
| 2024.06.07
| Ben
| Quantization
|-
| 2024.06.06
| Tom
| Linux Raw
|-
| 2024.06.05
| Hugo
| Gaussian Splats
|-
| 2024.06.04
| Dennis
| [https://humanoids.wiki/w/Dennis%27_Speech_Project Speech representation learning papers]
|-
| 2024.06.03
| Paweł
| What I (want to) believe in
|-
| 2024.05.30
| Isaac
| VLMs
|-
| 2024.05.29
| Allen
| PPO
|}
[[Category: K-Scale]]
07e9874257e0fc441d4821436ef8b114d0d367a3
1498
1497
2024-06-10T22:52:48Z
Ben
2
wikitext
text/x-wiki
{| class="wikitable"
|-
! Date
! Presenter
! Topic
|-
| TBD
| Timothy
|
|-
| 2024.06.15
| Matt
| CAD software deep dive
|-
| 2024.06.14
| Allen
| [https://github.com/karpathy/llm.c llm.c]
|-
| 2024.06.13
| Kenji
| Principles of BLDCs
|-
| 2024.06.12
| Ryan
| 3-phase motors
|-
| 2024.06.11
| Vedant
| CAN Protocol
|-
| 2024.06.10
| Isaac
| [https://humanoids.wiki/w/Dennis%27_Speech_Project Speech Papers Round 2]
|-
| 2024.06.07
| Ben
| Quantization
|-
| 2024.06.06
| Tom
| Linux Raw
|-
| 2024.06.05
| Hugo
| Gaussian Splats
|-
| 2024.06.04
| Dennis
| [https://humanoids.wiki/w/Dennis%27_Speech_Project Speech representation learning papers]
|-
| 2024.06.03
| Paweł
| What I (want to) believe in
|-
| 2024.05.30
| Isaac
| VLMs
|-
| 2024.05.29
| Allen
| PPO
|}
[[Category: K-Scale]]
7b27c2572ea61f26a98e63137ffeb2d92d9098c7
1499
1498
2024-06-10T22:53:29Z
Ben
2
wikitext
text/x-wiki
{| class="wikitable"
|-
! Date
! Presenter
! Topic
|-
| TBD
| Timothy
|
|-
| 2024.06.15
| Matt
| CAD software deep dive
|-
| 2024.06.14
| Allen
| [https://github.com/karpathy/llm.c llm.c]
|-
| 2024.06.13
| Kenji
| Engineering principles of BLDCs
|-
| 2024.06.12
| Ryan
| 3-phase motors
|-
| 2024.06.11
| Vedant
| CAN Protocol
|-
| 2024.06.10
| Isaac
| [https://humanoids.wiki/w/Dennis%27_Speech_Project Speech Papers Round 2]
|-
| 2024.06.07
| Ben
| Quantization
|-
| 2024.06.06
| Tom
| Linux Raw
|-
| 2024.06.05
| Hugo
| Gaussian Splats
|-
| 2024.06.04
| Dennis
| [https://humanoids.wiki/w/Dennis%27_Speech_Project Speech representation learning papers]
|-
| 2024.06.03
| Paweł
| What I (want to) believe in
|-
| 2024.05.30
| Isaac
| VLMs
|-
| 2024.05.29
| Allen
| PPO
|}
[[Category: K-Scale]]
14716f6056391517afeb33f6a72eb23273fc410d
Alex
0
328
1506
2024-06-11T18:00:35Z
Ben
2
Created page with "Alex<ref>https://boardwalkrobotics.com/Alex.html</ref> is a robot designed by [[Boardwalk Robotics]]. {{infobox robot | name = Alex | organization = [[Boardwalk Robotics]] }}..."
wikitext
text/x-wiki
Alex<ref>https://boardwalkrobotics.com/Alex.html</ref> is a robot designed by [[Boardwalk Robotics]].
{{infobox robot
| name = Alex
| organization = [[Boardwalk Robotics]]
}}
=== References ===
<references />
[[Category:Robots]]
96e681ec117df013cde574d61f1d0e143abb2239
Robot Descriptions List
0
281
1508
1507
2024-06-11T19:27:12Z
Vrtnis
21
/*Mekamon*/
wikitext
text/x-wiki
=== Humanoids ===
{| class="wikitable sortable"
|-
! Sr No !! Name !! Maker !! Formats !! URL !! License !! Meshes !! Inertias !! Collisions
|-
| 1 || Stompy || K-Scale Labs || URDF || [https://stompy.kscale.dev URDF], [https://stompy.kscale.dev MJCF] || MIT || ✔️ || ✔️ || ✔️
|-
| 2 || Digit || Agility Robotics || URDF || [https://github.com/adubredu/DigitRobot.jl URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| 3 || H1 || UNITREE Robotics || MJCF || [https://github.com/google-deepmind/mujoco_menagerie/tree/main/unitree_h1 MJCF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 4 || Atlas v4 || Boston Dynamics || URDF || [https://github.com/openai/roboschool/tree/1.0.49/roboschool/models_robot/atlas_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 5 || Valkyrie || NASA JSC Robotics || URDF, Xacro || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/val_description/model URDF], [https://gitlab.com/nasa-jsc-robotics/val_description Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 6 || JVRC-1 || AIST || MJCF, URDF || [https://github.com/isri-aist/jvrc_mj_description/ MJCF], [https://github.com/stephane-caron/jvrc_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 7 || iCub || IIT || URDF || [https://github.com/robotology/icub-models/tree/master/iCub URDF] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| 8 || JAXON || JSK || COLLADA, URDF, VRML || [https://github.com/stephane-caron/openrave_models/tree/master/JAXON COLLADA], [https://github.com/robot-descriptions/jaxon_description URDF], [https://github.com/start-jsk/rtmros_choreonoid/tree/master/jvrc_models/JAXON_JVRC VRML] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| 9 || Atlas DRC (v3) || Boston Dynamics || URDF || [https://github.com/RobotLocomotion/models/blob/master/atlas/atlas_convex_hull.urdf URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 10 || Gundam RX-78 || Bandai Namco Filmworks || URDF || [https://github.com/gundam-global-challenge/gundam_robot/tree/master/gundam_rx78_description URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| 11 || Romeo || Aldebaran Robotics || URDF || [https://github.com/ros-aldebaran/romeo_robot/tree/master/romeo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 12 || SigmaBan || Rhoban || URDF || [https://github.com/Rhoban/sigmaban_urdf URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 13 || Robonaut 2 || NASA JSC Robotics || URDF || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/r2_description URDF] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 14 || TALOS || PAL Robotics || URDF, Xacro || [https://github.com/stack-of-tasks/talos-data URDF], [https://github.com/pal-robotics/talos_robot/tree/kinetic-devel/talos_description Xacro] || LGPL-3.0, Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 15 || WALK-MAN || IIT || Xacro || [https://github.com/ADVRHumanoids/iit-walkman-ros-pkg/tree/master/walkman_urdf Xacro] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 16 || Draco3 || Apptronik || URDF || [https://github.com/shbang91/draco3_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 17 || ergoCub || IIT || URDF || [https://github.com/icub-tech-iit/ergocub-software/tree/master/urdf/ergoCub URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 18 || Baxter || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/baxter.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 19 || Pepper || SoftBank Robotics || URDF || [https://github.com/ros-naoqi/pepper_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 20 || Mini-Cheetah || MIT || URDF || [https://github.com/MIT-Mini-Cheetah/mini-cheetah URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 21 || Thor-Mang || ROBOTIS || URDF || [https://github.com/ROBOTIS-GIT/ROBOTIS-MANIPULATION-THORMANG URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 22 || Cassie || Agility Robotics || URDF || [https://github.com/agilityrobotics/cassie_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 23 || Sophia || Hanson Robotics || URDF || [https://github.com/hansonrobotics/sophia_robot URDF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 24 || Asimo || Honda || URDF || [https://github.com/honda/asimo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 25 || HRP-5P || Kawada Robotics || URDF || [https://github.com/kawada-robotics/hrp5p URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 26 || Valkyrie R5 || NASA || URDF, Xacro || [https://github.com/nasa/valkyrie_simulation URDF], [https://github.com/nasa/valkyrie_robot Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 27 || REEM-C || PAL Robotics || URDF || [https://github.com/pal-robotics/reemc_description URDF] || LGPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 28 || Darwin-OP || ROBOTIS || URDF || [https://github.com/ROBOTIS-GIT/Darwin_OP_ROS URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 29 || Poppy || Inria Flowers || URDF || [https://github.com/poppy-project/poppy_humanoid URDF] || GPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 30 || Kengoro || JSK || URDF || [https://github.com/jsk-ros-pkg/jsk_models/tree/master/kengoro_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 31 || SURALP || Istanbul Technical University || URDF || [https://github.com/suralp/suralp URDF] || GPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 32 || Kengoro || JSK || URDF || [https://github.com/jsk-ros-pkg/jsk_models/tree/master/kengoro_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 33 || ANYmal || ANYbotics || URDF || [https://github.com/leggedrobotics/anymal_b_simple_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 34 || MIR-Lola || Munich Institute of Robotics and Machine Intelligence || URDF || [https://github.com/mir-lab/lola_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 35 || HSR || Toyota || URDF || [https://github.com/toyota-research-institute/hsr_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 36 || Pepper 2 || SoftBank Robotics || URDF || [https://github.com/ros-naoqi/pepper_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 37 || BHR-4 || Beijing Institute of Technology || URDF || [https://github.com/bit-bots/bhr4_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 38 || Tiago || PAL Robotics || URDF || [https://github.com/pal-robotics/tiago_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 39 || InMoov || Gael Langevin || URDF || [https://github.com/InMoov/inmoov_ros URDF] || GPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 40 || BipedalWalker || OpenAI Gym || URDF || [https://github.com/openai/gym/tree/master/gym/envs/robotics/assets/bipedal_walker URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 41 || BOLT || Istituto Italiano di Tecnologia || URDF || [https://github.com/robotology/icub-main/tree/master/app/robots/bolt URDF] || GPL-2.0 || ✔️ || ✔️ || ✔️
|-
| 42 || HRP-4 || Kawada Robotics || URDF || [https://github.com/kawada-robotics/hrp4_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 43 || PI4 || PAL Robotics || URDF || [https://github.com/pal-robotics/pi4_description URDF] || LGPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 44 || HRP-7P || Kawada Robotics || URDF || [https://github.com/kawada-robotics/hrp7p_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 45 || Juno || UC Berkeley || URDF || [https://github.com/BerkeleyAutomation/juno_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 46 || Leo || Georgia Institute of Technology || URDF || [https://github.com/GeorgiaTechRobotLearning/leo_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 47 || Apollo || University of Pennsylvania || URDF || [https://github.com/penn-robotics/apollo_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 48 || Scuttle || Open Robotics || URDF || [https://github.com/openrobotics/scuttle_description URDF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 49 || MekaMon || Reach Robotics || URDF || [https://github.com/reachrobotics/mekamon_robot URDF] || MIT || ✔️ || ✔️ || ✔️
|}
=== End Effectors ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Allegro Hand || Wonik Robotics || URDF, MJCF || [https://github.com/RobotLocomotion/models/tree/master/allegro_hand_description/urdf URDF], [https://github.com/google-deepmind/mujoco_menagerie/tree/main/wonik_allegro MJCF] || BSD || ✔️ || ✔️ || ✔️
|-
| Shadow Hand E3M5 || The Shadow Robot Company || MJCF || [https://github.com/deepmind/mujoco_menagerie/tree/main/shadow_hand MJCF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Robotiq 2F-85 || Robotiq || MJCF, URDF, Xacro || [https://github.com/deepmind/mujoco_menagerie/tree/main/robotiq_2f85 MJCF], [https://github.com/a-price/robotiq_arg85_description URDF], [https://github.com/ros-industrial/robotiq/tree/kinetic-devel/robotiq_2f_85_gripper_visualization Xacro] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| BarrettHand || Barrett Technology || URDF || [https://github.com/jhu-lcsr-attic/bhand_model/tree/master/robots URDF] || BSD || ✖️ || ✔️ || ✔️
|-
| WSG 50 || SCHUNK || SDF || [https://github.com/RobotLocomotion/models/tree/master/wsg_50_description SDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Baxter Left End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/left_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Baxter Right End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/right_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|}
=== Educational ===
{| class="wikitable"
|-
! Name !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Double Pendulum || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/double_pendulum_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Simple Humanoid || URDF || [https://github.com/laas/simple_humanoid_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✖️
|-
| TriFingerEdu || URDF || [https://github.com/facebookresearch/differentiable-robot-model/tree/main/diff_robot_data/trifinger_edu_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| FingerEdu || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/finger_edu_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
== References ==
* GitHub and web searches
* https://github.com/robot-descriptions/awesome-robot-descriptions
* https://github.com/robot-descriptions/robot_descriptions.py
* https://github.com/robotology
== Citation ==
<pre>
@misc{humanoids-2024,
title={Robot Descriptions List},
author={K-Scale Humanoids Wiki Contributors},
year={2024},
url={https://humanoids.wiki/w/Robot_Descriptions_List}
}
</pre>
62c92352e4f6779a7b9b59fae9ca79472bc4a1a0
1515
1508
2024-06-11T19:50:13Z
Vrtnis
21
wikitext
text/x-wiki
=== Humanoids ===
{| class="wikitable sortable"
|-
! Sr No !! Name !! Maker !! Formats !! URL !! License !! Meshes !! Inertias !! Collisions
|-
| 1 || Stompy || K-Scale Labs || URDF || [https://stompy.kscale.dev URDF], [https://stompy.kscale.dev MJCF] || MIT || ✔️ || ✔️ || ✔️
|-
| 2 || Digit || Agility Robotics || URDF || [https://github.com/adubredu/DigitRobot.jl URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| 3 || H1 || UNITREE Robotics || MJCF || [https://github.com/google-deepmind/mujoco_menagerie/tree/main/unitree_h1 MJCF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 4 || Atlas v4 || Boston Dynamics || URDF || [https://github.com/openai/roboschool/tree/1.0.49/roboschool/models_robot/atlas_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 5 || Valkyrie || NASA JSC Robotics || URDF, Xacro || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/val_description/model URDF], [https://gitlab.com/nasa-jsc-robotics/val_description Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 6 || JVRC-1 || AIST || MJCF, URDF || [https://github.com/isri-aist/jvrc_mj_description/ MJCF], [https://github.com/stephane-caron/jvrc_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 7 || iCub || IIT || URDF || [https://github.com/robotology/icub-models/tree/master/iCub URDF] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| 8 || JAXON || JSK || COLLADA, URDF, VRML || [https://github.com/stephane-caron/openrave_models/tree/master/JAXON COLLADA], [https://github.com/robot-descriptions/jaxon_description URDF], [https://github.com/start-jsk/rtmros_choreonoid/tree/master/jvrc_models/JAXON_JVRC VRML] || CC-BY-SA-4.0 || ✔️ || ✔️ || ✔️
|-
| 9 || Atlas DRC (v3) || Boston Dynamics || URDF || [https://github.com/RobotLocomotion/models/blob/master/atlas/atlas_convex_hull.urdf URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 10 || Gundam RX-78 || Bandai Namco Filmworks || URDF || [https://github.com/gundam-global-challenge/gundam_robot/tree/master/gundam_rx78_description URDF] || ✖️ || ✔️ || ✔️ || ✔️
|-
| 11 || Romeo || Aldebaran Robotics || URDF || [https://github.com/ros-aldebaran/romeo_robot/tree/master/romeo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 12 || SigmaBan || Rhoban || URDF || [https://github.com/Rhoban/sigmaban_urdf URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 13 || Robonaut 2 || NASA JSC Robotics || URDF || [https://github.com/gkjohnson/nasa-urdf-robots/tree/master/r2_description URDF] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 14 || TALOS || PAL Robotics || URDF, Xacro || [https://github.com/stack-of-tasks/talos-data URDF], [https://github.com/pal-robotics/talos_robot/tree/kinetic-devel/talos_description Xacro] || LGPL-3.0, Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 15 || WALK-MAN || IIT || Xacro || [https://github.com/ADVRHumanoids/iit-walkman-ros-pkg/tree/master/walkman_urdf Xacro] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 16 || Draco3 || Apptronik || URDF || [https://github.com/shbang91/draco3_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 17 || ergoCub || IIT || URDF || [https://github.com/icub-tech-iit/ergocub-software/tree/master/urdf/ergoCub URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 18 || Baxter || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/baxter.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 19 || Pepper || SoftBank Robotics || URDF || [https://github.com/ros-naoqi/pepper_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 20 || Mini-Cheetah || MIT || URDF || [https://github.com/MIT-Mini-Cheetah/mini-cheetah URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 21 || Thor-Mang || ROBOTIS || URDF || [https://github.com/ROBOTIS-GIT/ROBOTIS-MANIPULATION-THORMANG URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 22 || Cassie || Agility Robotics || URDF || [https://github.com/agilityrobotics/cassie_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 23 || Sophia || Hanson Robotics || URDF || [https://github.com/hansonrobotics/sophia_robot URDF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 24 || Asimo || Honda || URDF || [https://github.com/honda/asimo_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 25 || HRP-5P || Kawada Robotics || URDF || [https://github.com/kawada-robotics/hrp5p URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 26 || Valkyrie R5 || NASA || URDF, Xacro || [https://github.com/nasa/valkyrie_simulation URDF], [https://github.com/nasa/valkyrie_robot Xacro] || NASA-1.3 || ✔️ || ✔️ || ✔️
|-
| 27 || REEM-C || PAL Robotics || URDF || [https://github.com/pal-robotics/reemc_description URDF] || LGPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 28 || Darwin-OP || ROBOTIS || URDF || [https://github.com/ROBOTIS-GIT/Darwin_OP_ROS URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 29 || Poppy || Inria Flowers || URDF || [https://github.com/poppy-project/poppy_humanoid URDF] || GPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 30 || Kengoro || JSK || URDF || [https://github.com/jsk-ros-pkg/jsk_models/tree/master/kengoro_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 31 || SURALP || Istanbul Technical University || URDF || [https://github.com/suralp/suralp URDF] || GPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 32 || Kengoro || JSK || URDF || [https://github.com/jsk-ros-pkg/jsk_models/tree/master/kengoro_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 33 || ANYmal || ANYbotics || URDF || [https://github.com/leggedrobotics/anymal_b_simple_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 34 || MIR-Lola || Munich Institute of Robotics and Machine Intelligence || URDF || [https://github.com/mir-lab/lola_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 35 || HSR || Toyota || URDF || [https://github.com/toyota-research-institute/hsr_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 36 || Pepper 2 || SoftBank Robotics || URDF || [https://github.com/ros-naoqi/pepper_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 37 || BHR-4 || Beijing Institute of Technology || URDF || [https://github.com/bit-bots/bhr4_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 38 || Tiago || PAL Robotics || URDF || [https://github.com/pal-robotics/tiago_robot URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 39 || InMoov || Gael Langevin || URDF || [https://github.com/InMoov/inmoov_ros URDF] || GPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 40 || BipedalWalker || OpenAI Gym || URDF || [https://github.com/openai/gym/tree/master/gym/envs/robotics/assets/bipedal_walker URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 41 || BOLT || Istituto Italiano di Tecnologia || URDF || [https://github.com/robotology/icub-main/tree/master/app/robots/bolt URDF] || GPL-2.0 || ✔️ || ✔️ || ✔️
|-
| 42 || HRP-4 || Kawada Robotics || URDF || [https://github.com/kawada-robotics/hrp4_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 43 || PI4 || PAL Robotics || URDF || [https://github.com/pal-robotics/pi4_description URDF] || LGPL-3.0 || ✔️ || ✔️ || ✔️
|-
| 44 || HRP-7P || Kawada Robotics || URDF || [https://github.com/kawada-robotics/hrp7p_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 45 || Juno || UC Berkeley || URDF || [https://github.com/BerkeleyAutomation/juno_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| 46 || Leo || Georgia Institute of Technology || URDF || [https://github.com/GeorgiaTechRobotLearning/leo_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 47 || Apollo || University of Pennsylvania || URDF || [https://github.com/penn-robotics/apollo_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| 48 || Scuttle || Open Robotics || URDF || [https://github.com/openrobotics/scuttle_description URDF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| 49 || MekaMon || Reach Robotics || URDF || [https://github.com/reachrobotics/mekamon_robot URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| 50 || Roboy || TUM || URDF || [https://github.com/roboy/roboy_description URDF] || GPL-3.0 || ✔️ || ✔️ || ✔️
|}
=== End Effectors ===
{| class="wikitable"
|-
! Name !! Maker !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Allegro Hand || Wonik Robotics || URDF, MJCF || [https://github.com/RobotLocomotion/models/tree/master/allegro_hand_description/urdf URDF], [https://github.com/google-deepmind/mujoco_menagerie/tree/main/wonik_allegro MJCF] || BSD || ✔️ || ✔️ || ✔️
|-
| Shadow Hand E3M5 || The Shadow Robot Company || MJCF || [https://github.com/deepmind/mujoco_menagerie/tree/main/shadow_hand MJCF] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Robotiq 2F-85 || Robotiq || MJCF, URDF, Xacro || [https://github.com/deepmind/mujoco_menagerie/tree/main/robotiq_2f85 MJCF], [https://github.com/a-price/robotiq_arg85_description URDF], [https://github.com/ros-industrial/robotiq/tree/kinetic-devel/robotiq_2f_85_gripper_visualization Xacro] || BSD-2-Clause || ✔️ || ✔️ || ✔️
|-
| BarrettHand || Barrett Technology || URDF || [https://github.com/jhu-lcsr-attic/bhand_model/tree/master/robots URDF] || BSD || ✖️ || ✔️ || ✔️
|-
| WSG 50 || SCHUNK || SDF || [https://github.com/RobotLocomotion/models/tree/master/wsg_50_description SDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Baxter Left End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/left_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|-
| Baxter Right End Effector || Rethink Robotics || URDF, Xacro || [https://github.com/RethinkRobotics/baxter_common/tree/master/baxter_description/urdf/right_end_effector.urdf.xacro URDF, Xacro] || Apache-2.0 || ✔️ || ✔️ || ✔️
|}
=== Educational ===
{| class="wikitable"
|-
! Name !! Formats !! File !! License !! Meshes !! Inertias !! Collisions
|-
| Double Pendulum || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/double_pendulum_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|-
| Simple Humanoid || URDF || [https://github.com/laas/simple_humanoid_description URDF] || BSD-2-Clause || ✔️ || ✔️ || ✖️
|-
| TriFingerEdu || URDF || [https://github.com/facebookresearch/differentiable-robot-model/tree/main/diff_robot_data/trifinger_edu_description URDF] || MIT || ✔️ || ✔️ || ✔️
|-
| FingerEdu || URDF || [https://github.com/Gepetto/example-robot-data/tree/master/robots/finger_edu_description URDF] || BSD-3-Clause || ✔️ || ✔️ || ✔️
|}
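All of the description formats above (URDF, MJCF, Xacro, SDF) are plain XML, so a downloaded file can be inspected with nothing more than the Python standard library. The sketch below lists the links and joints of a URDF; the file path is a placeholder for any description from the tables above.
<syntaxhighlight lang="python">
# Minimal sketch: list the links and joints of a downloaded URDF.
# "robot.urdf" is a placeholder for any URDF from the tables above.
import xml.etree.ElementTree as ET

tree = ET.parse("robot.urdf")
robot = tree.getroot()  # URDF root element: <robot name="...">

print("robot name:", robot.get("name"))
print("links:", [link.get("name") for link in robot.findall("link")])
for joint in robot.findall("joint"):
    print("joint:", joint.get("name"),
          "type:", joint.get("type"),
          "parent:", joint.find("parent").get("link"),
          "child:", joint.find("child").get("link"))
</syntaxhighlight>
For programmatic access to many of these models, the <code>robot_descriptions.py</code> project listed in the References section below packages a large set of descriptions behind a single Python API.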
== References ==
* GitHub and web searches
* https://github.com/robot-descriptions/awesome-robot-descriptions
* https://github.com/robot-descriptions/robot_descriptions.py
* https://github.com/robotology
== Citation ==
<pre>
@misc{humanoids-2024,
title={Robot Descriptions List},
author={K-Scale Humanoids Wiki Contributors},
year={2024},
url={https://humanoids.wiki/w/Robot_Descriptions_List}
}
</pre>
cb17160ee0e6e64bedc9fbafb32f6a0e14197eca
Boardwalk Robotics
0
329
1509
2024-06-11T19:30:21Z
Vrtnis
21
Created page with "Boardwalk Robotics specializes in legged robotic systems, focusing on mechanical design, hardware development, software, and control systems. Based in Pensacola, Florida, the..."
wikitext
text/x-wiki
Boardwalk Robotics specializes in legged robotic systems, focusing on mechanical design, hardware development, software, and control systems. Based in Pensacola, Florida, the company operates as a private entity and has a team size of 1-10 employees.
a56e5352182f4441bd8ecc7379db9de24651f19c
1510
1509
2024-06-11T19:31:07Z
Vrtnis
21
wikitext
text/x-wiki
{{Infobox company
| name = Boardwalk Robotics
| country = United States
| headquarters = Pensacola
| website = [http://boardwalkrobotics.com boardwalkrobotics.com]
}}
Boardwalk Robotics specializes in legged robotic systems, focusing on mechanical design, hardware development, software, and control systems. Based in Pensacola, Florida, the company operates as a private entity and has a team size of 1-10 employees.
bdd7114ffd1867d52cb38d42e5ea24a2092d26de
1511
1510
2024-06-11T19:32:38Z
Vrtnis
21
wikitext
text/x-wiki
{{infobox company
| name = Boardwalk Robotics
| country = United States
| website_link = https://www.boardwalkrobotics.com
| specialties = Legged Robotic Systems
}}
Boardwalk Robotics specializes in legged robotic systems, focusing on mechanical design, hardware development, software, and control systems. Based in Pensacola, Florida, the company operates as a private entity and has a team size of 1-10 employees.
dfd8b1d37d7206f93eea79b7e3330729a9d4e237
1512
1511
2024-06-11T19:33:24Z
Vrtnis
21
wikitext
text/x-wiki
{{infobox company
| name = Boardwalk Robotics
| country = United States
| website_link = https://www.boardwalkrobotics.com
| robots = Alex
}}
Boardwalk Robotics specializes in legged robotic systems, focusing on mechanical design, hardware development, software, and control systems. Based in Pensacola, Florida, the company operates as a private entity and has a team size of 1-10 employees.
c9a3d620364ddb7defaae45bd704af09b906b1cd
1513
1512
2024-06-11T19:33:43Z
Vrtnis
21
wikitext
text/x-wiki
{{infobox company
| name = Boardwalk Robotics
| country = United States
| website_link = https://www.boardwalkrobotics.com
| robots = [[Alex]]
}}
Boardwalk Robotics specializes in legged robotic systems, focusing on mechanical design, hardware development, software, and control systems. Based in Pensacola, Florida, the company operates as a private entity and has a team size of 1-10 employees.
340cc2553f5d07a5f0e6e8b6fa2a5d700238d550
1514
1513
2024-06-11T19:34:17Z
Vrtnis
21
wikitext
text/x-wiki
{{infobox company
| name = Boardwalk Robotics
| country = United States
| website_link = https://www.boardwalkrobotics.com
| robots = [[Alex]]
}}
Boardwalk Robotics specializes in legged robotic systems, focusing on mechanical design, hardware development, software, and control systems. It is based in Pensacola, Florida.
ab0a3de60d1fdd203e2f14d3de4f80dab325f756
Jetson: MCP2515
0
330
1516
2024-06-13T20:39:03Z
Vedant
24
Created page with "=== Configuration === * Can access the Jetson Expansion Header Tool through <code>/opt/nvidia/jetson-io/jetson-io.py</code>"
wikitext
text/x-wiki
=== Configuration ===
* Can access the Jetson Expansion Header Tool through <code>/opt/nvidia/jetson-io/jetson-io.py</code>
b16e2e046219d9d274e5a8b7430407695d97a827
1517
1516
2024-06-13T20:53:20Z
Vedant
24
wikitext
text/x-wiki
=== Configuration ===
* Can access the Jetson Expansion Header Tool through <code>/opt/nvidia/jetson-io/jetson-io.py</code>
Color Convention:
Red: 3.3V (Pin 1)
Black: Gnd (Pin 39)
Green: CS (Pin 12)
Orange: MISO (Pin 21)
Blue: MOSI (Pin 19)
Yellow: SCK (Pin 23)
Purple: Interrupt (Pin 16)
c8bbf9bbfd0d8447a9ecfe76cf07afbe183672e0
1518
1517
2024-06-13T20:53:48Z
Vedant
24
wikitext
text/x-wiki
=== Configuration ===
* Can access the Jetson Expansion Header Tool through <code>/opt/nvidia/jetson-io/jetson-io.py</code>
Color Convention:
* Red: 3.3V (Pin 1)
* Black: Gnd (Pin 39)
* Green: CS (Pin 12)
* Orange: MISO (Pin 21)
* Blue: MOSI (Pin 19)
* Yellow: SCK (Pin 23)
* Purple: Interrupt (Pin 16)
67b09d1add26bdd15088b942224c13e30763521e
1519
1518
2024-06-13T20:56:28Z
Vedant
24
wikitext
text/x-wiki
=== Configuration ===
* Can access the Jetson Expansion Header Tool through <code>/opt/nvidia/jetson-io/jetson-io.py</code>
Color Convention:
* Red: 3.3V (Pin 1)
* Black: Gnd (Pin 39)
* Green: CS
* Orange: MISO (Pin 21)
* Blue: MOSI (Pin 19)
* Yellow: SCK (Pin 23)
* Purple: Interrupt
Configurations for Interrupt and CS Pins:
* Interrupt2: (Pin 16)
* CS2: (Pin 12)
be379d44abdf520d56273ae6151795477a7baaa9
1520
1519
2024-06-13T20:58:59Z
Vedant
24
wikitext
text/x-wiki
=== Configuration ===
* Can access the Jetson Expansion Header Tool through <code>/opt/nvidia/jetson-io/jetson-io.py</code>
Color Convention:
* Red: 3.3V (Pin 1)
* Black: Gnd (Pin 39)
* Green: CS
* Orange: MISO (Pin 21)
* Blue: MOSI (Pin 19)
* Yellow: SCK (Pin 23)
* Purple: Interrupt
Configurations for Interrupt and CS Pins:
* Interrupt: (Pin 16) + (Pin 32)
* CS: (Pin 12) + (Pin 38)
3859eff098e73ba97af91c1949e786c6a129b4c5
1521
1520
2024-06-13T21:00:41Z
Vedant
24
wikitext
text/x-wiki
=== Configuration ===
* The Jetson Expansion Header Tool can be accessed with the command <code>sudo /opt/nvidia/jetson-io/jetson-io.py</code>
Color Convention:
* Red: 3.3V (Pin 1)
* Black: Gnd (Pin 39)
* Green: CS
* Orange: MISO (Pin 21)
* Blue: MOSI (Pin 19)
* Yellow: SCK (Pin 23)
* Purple: Interrupt
Configurations for Interrupt and CS Pins:
* Interrupt: (Pin 16) + (Pin 32)
* CS: (Pin 12) + (Pin 38)
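Once the expansion header has been configured with the tool above and the MCP2515 shows up as a SocketCAN interface (brought up, for example, with <code>sudo ip link set can0 up type can bitrate 500000</code>), the wiring can be smoke-tested from Python with the <code>python-can</code> package. This is a hedged sketch: the interface name <code>can0</code>, the bitrate, and the test ID and payload are assumptions that depend on your device-tree overlay and on whatever is attached to the bus.
<syntaxhighlight lang="python">
# Minimal python-can smoke test for an MCP2515 exposed over SocketCAN.
# Assumes the interface enumerates as "can0" and is already up, e.g.:
#   sudo ip link set can0 up type can bitrate 500000
import can

bus = can.interface.Bus(channel="can0", bustype="socketcan")

# Send one test frame (arbitration ID and payload are arbitrary placeholders).
msg = can.Message(arbitration_id=0x123,
                  data=[0x11, 0x22, 0x33],
                  is_extended_id=False)
bus.send(msg)

# Wait up to one second for any frame on the bus and print it.
reply = bus.recv(timeout=1.0)
print("received:", reply)

bus.shutdown()
</syntaxhighlight>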
d7a41ec6155078cf9c5cd33063a88c97a50b9878
AstriBot Corporation
0
152
1522
621
2024-06-13T22:26:28Z
108.211.178.220
0
wikitext
text/x-wiki
The [[Astribot S1]] is a product of the Chinese tech company Astribot Corporation, a subsidiary of Stardust Intelligence. This firm has made notable strides in the field of AI-controlled humanoid robots, developing the Astribot S1 as a highly efficient helper capable of lifting objects weighing up to 10 kilograms and moving at a speed of 10 meters per second.<ref>https://elblog.pl/2024/04/28/astribot-corporations-s1-robot-promises-swift-and-skilled-assistance/</ref>
AstriBot Corporation started operation in 2022 and took just a year to develop its first humanoid robot, S1.<ref>https://www.msn.com/en-us/news/other/china-s-s1-robot-impresses-with-its-human-like-speed-and-precision/ar-AA1nJ0BG</ref>
== Overview ==
AstriBot Corporation, a subsidiary of Stardust Intelligence, is a Chinese tech company responsible for the creation and development of the Astribot S1, an efficient and capable humanoid robotic assistant. Known for its impressive parameters, such as the ability to lift up to 10 kilograms and move at speeds of 10 meters per second, Astribot S1 represents significant advancements in the field of AI-controlled humanoid robotics. The company began operations in 2022 and managed to develop its first humanoid robot, Astribot S1, in just a year.
{{infobox company
| name = AstriBot Corporation
| country = China
| website_link =
| robots = Astribot S1
}}
== References ==
<references />
[[Category:Companies]]
03424f803eb670736c39950b498adcdc2ed84ed3
Main Page
0
1
1523
1505
2024-06-14T21:02:19Z
99.57.142.247
0
/* List of Robots */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.youtube.com/watch?v=LiNgr1tz49I&list=PLZnJoM76RM6ItAfZIxJYNKdaR_BobleLY Advanced Robot Dynamics]
| High-quality open-source course from CMU
|-
| [https://www.youtube.com/watch?v=6rUdAOCNXAU&list=PLZnJoM76RM6KugDT9sw5zhAmqKnGeoLRa Optimal Control]
| High-quality open-source course from CMU
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
| Textbook for graduate-level courses in robotics
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Courses]]
| Category for pages about useful courses related to robotics
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Firmware]]
| Category for pages relating to firmware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Papers]]
| Category for humanoid robotics papers
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== List of Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]], [[Alex]]
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[Engineered Arts]]
| [[Ameca]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[Haier]]
| [[Kuavo (Kuafu)]]
|-
| [[Honda Robotics]]
| [[ASIMO]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Instituto Italiano]]
| [[iCub]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[LASER Robotics]]
| [[HECTOR V2]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[NASA]]
| [[Valkyrie]], [[Robonaut2]]
|-
| [[NEURA Robotics]]
| [[4NE-1]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[Pollen Robotics]]
| [[Reachy]]
|-
| [[Proxy]]
|
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[SoftBank Robotics]]
| [[Pepper]], [[NAO]]
|-
| [[Stanford Robotics Lab]]
| [[OceanOneK]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[SUPCON]]
| [[Navigator α]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[THK]]
|
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[UC Berkeley]]
| [[Berkeley Blue]]
|-
| [[Unitree]]
| [[H1]], [[G1]]
|-
| [[University of Tehran]]
| [[Surena IV]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[WorkFar]]
| [[WorkFar Syntro]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Xpeng]]
| [[PX5]]
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== List of Actuators ===
{| class="wikitable"
|-
! Actuator
! Notes
|-
| [[OBot]]
| Open-source actuator
|-
| [[SPIN Servo]]
| Open-source actuator
|-
| [[VESCular6]]
| A project based on [[VESC]]
|-
| [[ODrive]]
| A precision motor controller
|-
| [[Solo Motor Controller]]
| A motor controller alternative to the [[ODrive]].
|-
| [[J60]]
| Actuators built for the [[DEEP Robotics]] quadrupeds.
|-
| [[K-Scale Motor Controller]]
| An open-source motor controller
|}
f7efa7d18b043676ed9ea6a975f7e2c489e5dc53
THK
0
331
1524
2024-06-14T21:09:14Z
99.57.142.247
0
Created page with "THK is a Japanese manufacturer of machine parts. They have demonstrated many humanoid robots including this one that claims to be the fastest running humanoid in the world at..."
wikitext
text/x-wiki
THK is a Japanese manufacturer of machine parts. They have demonstrated many humanoid robots, including one claimed to be the fastest-running humanoid in the world at 3.5 m/s: https://youtu.be/U5ve7_K85mk
af1390b513d5d1d4f80555390cec15b5efc6b347
K-Scale Lecture Circuit
0
299
1525
1499
2024-06-17T16:51:44Z
185.169.0.177
0
wikitext
text/x-wiki
{| class="wikitable"
|-
! Date
! Presenter
! Topic
|-
| TBD
| Timothy
|
|-
| 2024.06.17
| Matt
| CAD software deep dive
|-
| 2024.06.14
| Allen
| [https://github.com/karpathy/llm.c llm.c]
|-
| 2024.06.13
| Kenji
| Engineering principles of BLDCs
|-
| 2024.06.12
| Ryan
| 3-phase motors
|-
| 2024.06.11
| Vedant
| CAN Protocol
|-
| 2024.06.10
| Isaac
| [https://humanoids.wiki/w/Dennis%27_Speech_Project Speech Papers Round 2]
|-
| 2024.06.07
| Ben
| Quantization
|-
| 2024.06.06
| Tom
| Linux Raw
|-
| 2024.06.05
| Hugo
| Gaussian Splats
|-
| 2024.06.04
| Dennis
| [https://humanoids.wiki/w/Dennis%27_Speech_Project Speech representation learning papers]
|-
| 2024.06.03
| Paweł
| What I (want to) believe in
|-
| 2024.05.30
| Isaac
| VLMs
|-
| 2024.05.29
| Allen
| PPO
|}
[[Category: K-Scale]]
2cd446c572e473e89f8a0dc0301cb12b9ebd4002
1527
1525
2024-06-18T15:39:47Z
Budzianowski
19
wikitext
text/x-wiki
{| class="wikitable"
|-
! Date
! Presenter
! Topic
|-
| TBD
| Timothy
|
|-
| 2024.06.27
| Paweł
| OpenVLA
|-
| 2024.06.17
| Matt
| CAD software deep dive
|-
| 2024.06.14
| Allen
| [https://github.com/karpathy/llm.c llm.c]
|-
| 2024.06.13
| Kenji
| Engineering principles of BLDCs
|-
| 2024.06.12
| Ryan
| 3-phase motors
|-
| 2024.06.11
| Vedant
| CAN Protocol
|-
| 2024.06.10
| Isaac
| [https://humanoids.wiki/w/Dennis%27_Speech_Project Speech Papers Round 2]
|-
| 2024.06.07
| Ben
| Quantization
|-
| 2024.06.06
| Tom
| Linux Raw
|-
| 2024.06.05
| Hugo
| Gaussian Splats
|-
| 2024.06.04
| Dennis
| [https://humanoids.wiki/w/Dennis%27_Speech_Project Speech representation learning papers]
|-
| 2024.06.03
| Paweł
| What I (want to) believe in
|-
| 2024.05.30
| Isaac
| VLMs
|-
| 2024.05.29
| Allen
| PPO
|}
[[Category: K-Scale]]
2bb3eda52300a7f02b6ee2cd684e2a40fc1c0dad
1539
1527
2024-06-19T00:04:28Z
Ben
2
wikitext
text/x-wiki
{| class="wikitable"
|-
! Date
! Presenter
! Topic
|-
| TBD
| Timothy
|
|-
| 2024.06.27
| Paweł
| OpenVLA
|-
| 2024.06.21
| Introduction to KiCAD
| Ryan
|-
| 2024.06.20
| Kenji
| Principles of Power Electronics
|-
| 2024.06.19
| Allen
| Neural Network Loss Functions
|-
| 2024.06.18
| Ben
| Neural network inference on the edge
|-
| 2024.06.17
| Matt
| CAD software deep dive
|-
| 2024.06.14
| Allen
| [https://github.com/karpathy/llm.c llm.c]
|-
| 2024.06.13
| Kenji
| Engineering principles of BLDCs
|-
| 2024.06.12
| Ryan
| 3-phase motors
|-
| 2024.06.11
| Vedant
| CAN Protocol
|-
| 2024.06.10
| Isaac
| [https://humanoids.wiki/w/Dennis%27_Speech_Project Speech Papers Round 2]
|-
| 2024.06.07
| Ben
| Quantization
|-
| 2024.06.06
| Tom
| Linux Raw
|-
| 2024.06.05
| Hugo
| Gaussian Splats
|-
| 2024.06.04
| Dennis
| [https://humanoids.wiki/w/Dennis%27_Speech_Project Speech representation learning papers]
|-
| 2024.06.03
| Paweł
| What I (want to) believe in
|-
| 2024.05.30
| Isaac
| VLMs
|-
| 2024.05.29
| Allen
| PPO
|}
[[Category: K-Scale]]
ae681be53fbcbd8ac343e7ab9b9b7aec67c800a3
1540
1539
2024-06-19T00:05:06Z
Ben
2
wikitext
text/x-wiki
{| class="wikitable"
|-
! Date
! Presenter
! Topic
|-
| TBD
| Timothy
|
|-
| 2024.06.27
| Paweł
| OpenVLA
|-
| 2024.06.21
| Ryan
| Introduction to KiCAD
|-
| 2024.06.20
| Kenji
| Principles of Power Electronics
|-
| 2024.06.19
| Allen
| Neural Network Loss Functions
|-
| 2024.06.18
| Ben
| Neural network inference on the edge
|-
| 2024.06.17
| Matt
| CAD software deep dive
|-
| 2024.06.14
| Allen
| [https://github.com/karpathy/llm.c llm.c]
|-
| 2024.06.13
| Kenji
| Engineering principles of BLDCs
|-
| 2024.06.12
| Ryan
| 3-phase motors
|-
| 2024.06.11
| Vedant
| CAN Protocol
|-
| 2024.06.10
| Isaac
| [https://humanoids.wiki/w/Dennis%27_Speech_Project Speech Papers Round 2]
|-
| 2024.06.07
| Ben
| Quantization
|-
| 2024.06.06
| Tom
| Linux Raw
|-
| 2024.06.05
| Hugo
| Gaussian Splats
|-
| 2024.06.04
| Dennis
| [https://humanoids.wiki/w/Dennis%27_Speech_Project Speech representation learning papers]
|-
| 2024.06.03
| Paweł
| What I (want to) believe in
|-
| 2024.05.30
| Isaac
| VLMs
|-
| 2024.05.29
| Allen
| PPO
|}
[[Category: K-Scale]]
056494e249fee25eb10c25b781932985049d2bb9
1544
1540
2024-06-20T00:46:28Z
Ben
2
wikitext
text/x-wiki
{| class="wikitable"
|-
! Date
! Presenter
! Topic
|-
| 2024.06.28
| Kenji
| Principles of Power Electronics
|-
| 2024.06.27
| Paweł
| OpenVLA
|-
| 2024.06.21
| Ryan
| Introduction to KiCAD
|-
| 2024.06.20
| Timothy
| Diffusion
|-
| 2024.06.19
| Allen
| Neural Network Loss Functions
|-
| 2024.06.18
| Ben
| Neural network inference on the edge
|-
| 2024.06.17
| Matt
| CAD software deep dive
|-
| 2024.06.14
| Allen
| [https://github.com/karpathy/llm.c llm.c]
|-
| 2024.06.13
| Kenji
| Engineering principles of BLDCs
|-
| 2024.06.12
| Ryan
| 3-phase motors
|-
| 2024.06.11
| Vedant
| CAN Protocol
|-
| 2024.06.10
| Isaac
| [https://humanoids.wiki/w/Dennis%27_Speech_Project Speech Papers Round 2]
|-
| 2024.06.07
| Ben
| Quantization
|-
| 2024.06.06
| Tom
| Linux Raw
|-
| 2024.06.05
| Hugo
| Gaussian Splats
|-
| 2024.06.04
| Dennis
| [https://humanoids.wiki/w/Dennis%27_Speech_Project Speech representation learning papers]
|-
| 2024.06.03
| Paweł
| What I (want to) believe in
|-
| 2024.05.30
| Isaac
| VLMs
|-
| 2024.05.29
| Allen
| PPO
|}
[[Category: K-Scale]]
9b2c5c1dc8b082eda67dd394570ad8a460fee405
Qualcomm Neural Processing Unit
0
332
1526
2024-06-18T00:12:16Z
185.169.0.177
0
Created page with "The Qualcomm NPU is a neural network accelerator for Qualcomm chips. * [https://www.qualcomm.com/developer/software/neural-processing-sdk-for-ai Qualcomm NPU SDK] * [https://..."
wikitext
text/x-wiki
The Qualcomm NPU is a neural network accelerator for Qualcomm chips.
* [https://www.qualcomm.com/developer/software/neural-processing-sdk-for-ai Qualcomm NPU SDK]
* [https://github.com/quic/ai-hub-models AI Hub Models]
fa214f1975d0880fa1eb6f9e137d9bef3074edff
K-Scale Weekly Progress Updates
0
294
1528
1492
2024-06-18T19:23:15Z
108.211.178.220
0
wikitext
text/x-wiki
{| class="wikitable"
|-
! Link
|-
| [https://x.com/kscalelabs/status/1801749382167204086 2024.06.14]
|-
| [https://x.com/kscalelabs/status/1799197382208590132 2024.06.07]
|-
| [https://x.com/kscalelabs/status/1796617681455775944 2024.05.31]
|-
| [https://x.com/kscalelabs/status/1794109131214712914 2024.05.24]
|-
| [https://x.com/kscalelabs/status/1791507358780461496 2024.05.17]
|-
| [https://x.com/kscalelabs/status/1788968705378181145 2024.05.10]
|}
[[Category:K-Scale]]
b59a6757d23994b1c1a2573cdb5ab235aa72902e
Onshape talk notes
0
333
1529
2024-06-18T22:26:42Z
Vrtnis
21
/* add notes*/
wikitext
text/x-wiki
===== Starting with 2D Sketches=====
Begin CAD designs with 2D sketches on planes, then extrude to create 3D shapes.
===== Dimensioning =====
Proper dimensioning ensures accuracy, transforming sketches from blue (undefined) to black (fully defined).
eba6c8d8bba0f0c5f035f03fb168a58646f26a6d
1530
1529
2024-06-18T22:28:14Z
Vrtnis
21
wikitext
text/x-wiki
Notes from onshape talk by [[User:Matt]]
===== Starting with 2D Sketches=====
Begin CAD designs with 2D sketches on planes, then extrude to create 3D shapes.
===== Dimensioning =====
Proper dimensioning ensures accuracy, transforming sketches from blue (undefined) to black (fully defined).
bd944897722cb82a628d26823f8c7449401cd72c
1531
1530
2024-06-18T22:29:00Z
Vrtnis
21
wikitext
text/x-wiki
Notes from onshape talk by [[User:Matt]]
===== Starting with 2D Sketches=====
Begin CAD designs with 2D sketches on planes, then extrude to create 3D shapes.
===== Dimensioning =====
Proper dimensioning ensures accuracy, transforming sketches from blue (undefined) to black (fully defined).
===== Extrude Tool =====
Converts 2D sketches into 3D objects. Options like "new," "add," and "remove" are key for modifying shapes.
51b8f9ea14fb23677b931cc79a4da5add2564d56
1532
1531
2024-06-18T22:30:01Z
Vrtnis
21
wikitext
text/x-wiki
Notes from onshape talk by [[User:Matt]] as a part of [[K-Scale_Lecture_Circuit]]
===== Starting with 2D Sketches=====
Begin CAD designs with 2D sketches on planes, then extrude to create 3D shapes.
===== Dimensioning =====
Proper dimensioning ensures accuracy, transforming sketches from blue (undefined) to black (fully defined).
===== Extrude Tool =====
Converts 2D sketches into 3D objects. Options like "new," "add," and "remove" are key for modifying shapes.
3eab1841a0ec14f8adc32cfb4202bba4681d4f8c
1533
1532
2024-06-18T22:30:55Z
Vrtnis
21
wikitext
text/x-wiki
Notes from onshape talk by [[User:Matt]] as a part of [[K-Scale_Lecture_Circuit]]
===== Starting with 2D Sketches=====
Begin CAD designs with 2D sketches on planes, then extrude to create 3D shapes.
===== Dimensioning =====
Proper dimensioning ensures accuracy, transforming sketches from blue (undefined) to black (fully defined).
===== Extrude Tool =====
Converts 2D sketches into 3D objects. Options like "new," "add," and "remove" are key for modifying shapes.
===== Rollback Bar=====
Allows reverting to previous steps in the design, useful for troubleshooting and understanding the sequence of operations.
===== Using Constraints =====
Applying constraints like coincident, perpendicular, and parallel defines precise relationships, making the design robust and easier to modify.
213b283c768375dee4274cc1e3e0e7da000a94da
1534
1533
2024-06-18T22:33:24Z
Vrtnis
21
wikitext
text/x-wiki
Notes from onshape talk by [[User:Matt]] as a part of [[K-Scale_Lecture_Circuit]]
===== Starting with 2D Sketches=====
Begin CAD designs with 2D sketches on planes, then extrude to create 3D shapes.
===== Dimensioning =====
Proper dimensioning ensures accuracy, transforming sketches from blue (undefined) to black (fully defined).
===== Extrude Tool =====
Converts 2D sketches into 3D objects. Options like "new," "add," and "remove" are key for modifying shapes.
===== Rollback Bar=====
Allows reverting to previous steps in the design, useful for troubleshooting and understanding the sequence of operations.
===== Using Constraints =====
Applying constraints like coincident, perpendicular, and parallel defines precise relationships, making the design robust and easier to modify.
===== Editing sketches and maintaining integrity =====
Changing a sketch updates all dependent features, but deleting a sketch can cause errors, emphasizing careful management of dependencies.
7acff00c8c969b51519f06148b736f23646f4535
1535
1534
2024-06-18T22:33:39Z
Vrtnis
21
wikitext
text/x-wiki
Notes from onshape talk by [[User:Matt]] as a part of [[K-Scale_Lecture_Circuit]]
===== Starting with 2D Sketches=====
Begin CAD designs with 2D sketches on planes, then extrude to create 3D shapes.
===== Dimensioning =====
Proper dimensioning ensures accuracy, transforming sketches from blue (undefined) to black (fully defined).
===== Extrude Tool =====
Converts 2D sketches into 3D objects. Options like "new," "add," and "remove" are key for modifying shapes.
===== Rollback Bar=====
Allows reverting to previous steps in the design, useful for troubleshooting and understanding the sequence of operations.
===== Using Constraints =====
Applying constraints like coincident, perpendicular, and parallel defines precise relationships, making the design robust and easier to modify.
===== Editing sketches and maintaining integrity =====
Changing a sketch updates all dependent features, but deleting a sketch can cause errors, emphasizing careful management of dependencies.
bfd8bff26e1aac7e364df3a13c68054e77505eab
1536
1535
2024-06-18T22:40:00Z
Vrtnis
21
wikitext
text/x-wiki
Notes from onshape talk by [[User:Matt]] as a part of [[K-Scale_Lecture_Circuit]]
===== Starting with 2D Sketches=====
Begin CAD designs with 2D sketches on planes, then extrude to create 3D shapes.
===== Dimensioning =====
Proper dimensioning ensures accuracy, transforming sketches from blue (undefined) to black (fully defined).
===== Extrude Tool =====
Converts 2D sketches into 3D objects. Options like "new," "add," and "remove" are key for modifying shapes.
===== Rollback Bar=====
Allows reverting to previous steps in the design, useful for troubleshooting and understanding the sequence of operations.
===== Using Constraints =====
Applying constraints like coincident, perpendicular, and parallel defines precise relationships, making the design robust and easier to modify.
===== Editing sketches and maintaining integrity =====
Changing a sketch updates all dependent features, but deleting a sketch can cause errors, emphasizing careful management of dependencies.
===== Extruding Specific Shapes =====
Select specific parts of a sketch to extrude, allowing for complex and controlled modeling.
===== Symmetric Extrusion =====
Extrude a feature equally in both directions from a central plane for balanced and symmetrical parts.
===== Multiple Parts Management =====
Avoid creating multiple disconnected parts within a single part studio. Use assemblies for combining multiple parts.
55b14f97b8045d9d218b8e75755f1a68b54c372b
1537
1536
2024-06-18T22:41:46Z
Vrtnis
21
wikitext
text/x-wiki
Notes from onshape talk by [[User:Matt]] as a part of [[K-Scale_Lecture_Circuit]]
===== Starting with 2D Sketches=====
Begin CAD designs with 2D sketches on planes, then extrude to create 3D shapes.
===== Dimensioning =====
Proper dimensioning ensures accuracy, transforming sketches from blue (undefined) to black (fully defined).
===== Extrude Tool =====
Converts 2D sketches into 3D objects. Options like "new," "add," and "remove" are key for modifying shapes.
===== Rollback Bar=====
Allows reverting to previous steps in the design, useful for troubleshooting and understanding the sequence of operations.
===== Using Constraints =====
Applying constraints like coincident, perpendicular, and parallel defines precise relationships, making the design robust and easier to modify.
===== Editing sketches and maintaining integrity =====
Changing a sketch updates all dependent features, but deleting a sketch can cause errors, emphasizing careful management of dependencies.
===== Extruding Specific Shapes =====
Select specific parts of a sketch to extrude, allowing for complex and controlled modeling.
===== Symmetric Extrusion =====
Extrude a feature equally in both directions from a central plane for balanced and symmetrical parts.
===== Multiple Parts Management =====
Avoid creating multiple disconnected parts within a single part studio. Use assemblies for combining multiple parts.
===== Extruding Closed Shapes =====
Only closed shapes can be extruded.
Converting lines to construction lines helps define closed shapes without affecting the extrusion process.
===== Boolean Operations=====
Combine multiple parts into one using Boolean operations, though designing parts separately is often advised for clarity and simplicity.
===== Trim Tool =====
The Trim tool helps clean up sketches by removing unwanted lines and intersections, aiding in creating complex shapes.
3e4134822bfafccfae003dccb7db7a87123cac84
1538
1537
2024-06-18T22:42:00Z
Vrtnis
21
wikitext
text/x-wiki
Notes from onshape talk by [[User:Matt]] as a part of [[K-Scale_Lecture_Circuit]]
===== Starting with 2D Sketches=====
Begin CAD designs with 2D sketches on planes, then extrude to create 3D shapes.
===== Dimensioning =====
Proper dimensioning ensures accuracy, transforming sketches from blue (undefined) to black (fully defined).
===== Extrude Tool =====
Converts 2D sketches into 3D objects. Options like "new," "add," and "remove" are key for modifying shapes.
===== Rollback Bar=====
Allows reverting to previous steps in the design, useful for troubleshooting and understanding the sequence of operations.
===== Using Constraints =====
Applying constraints like coincident, perpendicular, and parallel defines precise relationships, making the design robust and easier to modify.
===== Editing sketches and maintaining integrity =====
Changing a sketch updates all dependent features, but deleting a sketch can cause errors, emphasizing careful management of dependencies.
===== Extruding Specific Shapes =====
Select specific parts of a sketch to extrude, allowing for complex and controlled modeling.
===== Symmetric Extrusion =====
Extrude a feature equally in both directions from a central plane for balanced and symmetrical parts.
===== Multiple Parts Management =====
Avoid creating multiple disconnected parts within a single part studio. Use assemblies for combining multiple parts.
===== Extruding Closed Shapes =====
Only closed shapes can be extruded.
Converting lines to construction lines helps define closed shapes without affecting the extrusion process.
===== Boolean Operations=====
Combine multiple parts into one using Boolean operations, though designing parts separately is often advised for clarity and simplicity.
===== Trim Tool =====
The Trim tool helps clean up sketches by removing unwanted lines and intersections, aiding in creating complex shapes.
58c585bffc26242992091abe3b9ff9125d3addad
1542
1538
2024-06-20T00:40:31Z
Vrtnis
21
wikitext
text/x-wiki
Notes from onshape talk by [[User:Matt]] as a part of [[K-Scale_Lecture_Circuit]]
====== Starting with 2D Sketches ======
Begin CAD designs with 2D sketches on planes, then extrude to create 3D shapes.
====== Dimensioning ======
Proper dimensioning ensures accuracy, transforming sketches from blue (undefined) to black (fully defined).
====== Extrude Tool ======
Converts 2D sketches into 3D objects. Options like "new," "add," and "remove" are key for modifying shapes.
====== Rollback Bar ======
Allows reverting to previous steps in the design, useful for troubleshooting and understanding the sequence of operations.
====== Using Constraints ======
Applying constraints like coincident, perpendicular, and parallel defines precise relationships, making the design robust and easier to modify.
====== Editing Sketches and Maintaining Integrity ======
Changing a sketch updates all dependent features, but deleting a sketch can cause errors, emphasizing careful management of dependencies.
====== Extruding Specific Shapes ======
Select specific parts of a sketch to extrude, allowing for complex and controlled modeling.
====== Symmetric Extrusion ======
Extrude a feature equally in both directions from a central plane for balanced and symmetrical parts.
====== Multiple Parts Management ======
Avoid creating multiple disconnected parts within a single part studio. Use assemblies
====== Extruding Closed Shapes ======
Only closed shapes can be extruded. Converting lines to construction lines helps define closed shapes without affecting the extrusion process.
====== Boolean Operations ======
Combine multiple parts into one using Boolean operations, though designing parts separately is often advised for clarity and simplicity.
====== Trim Tool ======
The Trim tool helps clean up sketches by removing unwanted lines and intersections, aiding in creating complex shapes.
====== Sequential Feature Order ======
The order of features (like planes, sketches, and extrusions) in the feature list is crucial. Moving a feature before its dependencies will cause errors.
====== Advanced Sketch Relationships ======
Besides basic dimensions, you can define angles and even perform arithmetic operations within dimensioning.
This includes converting units directly in the dimension input (e.g., adding inches to millimeters).
====== Fillet Tool ======
Filleting edges not only improves aesthetics by rounding corners but also reduces stress concentrations, making parts stronger.
Filleting can both add and remove material, depending on whether it’s applied to external or internal edges.
====== Creating Assemblies ======
Assemblies allow you to bring multiple parts together and define their interactions.
Parts can be fixed in space or mated using different types of mates (e.g., fasten, revolute).
Fastened mates lock parts together in a specific orientation, while revolute mates allow rotation around a fixed axis.
b9c4d7726478c8ed0b145c1473901324c56ded22
1543
1542
2024-06-20T00:41:02Z
Vrtnis
21
wikitext
text/x-wiki
Notes/highlights from onshape talk by [[User:Matt]] as a part of [[K-Scale_Lecture_Circuit]]
====== Starting with 2D Sketches ======
Begin CAD designs with 2D sketches on planes, then extrude to create 3D shapes.
====== Dimensioning ======
Proper dimensioning ensures accuracy, transforming sketches from blue (undefined) to black (fully defined).
====== Extrude Tool ======
Converts 2D sketches into 3D objects. Options like "new," "add," and "remove" are key for modifying shapes.
====== Rollback Bar ======
Allows reverting to previous steps in the design, useful for troubleshooting and understanding the sequence of operations.
====== Using Constraints ======
Applying constraints like coincident, perpendicular, and parallel defines precise relationships, making the design robust and easier to modify.
====== Editing Sketches and Maintaining Integrity ======
Changing a sketch updates all dependent features, but deleting a sketch can cause errors, emphasizing careful management of dependencies.
====== Extruding Specific Shapes ======
Select specific parts of a sketch to extrude, allowing for complex and controlled modeling.
====== Symmetric Extrusion ======
Extrude a feature equally in both directions from a central plane for balanced and symmetrical parts.
====== Multiple Parts Management ======
Avoid creating multiple disconnected parts within a single part studio. Use assemblies
====== Extruding Closed Shapes ======
Only closed shapes can be extruded. Converting lines to construction lines helps define closed shapes without affecting the extrusion process.
====== Boolean Operations ======
Combine multiple parts into one using Boolean operations, though designing parts separately is often advised for clarity and simplicity.
====== Trim Tool ======
The Trim tool helps clean up sketches by removing unwanted lines and intersections, aiding in creating complex shapes.
====== Sequential Feature Order ======
The order of features (like planes, sketches, and extrusions) in the feature list is crucial. Moving a feature before its dependencies will cause errors.
====== Advanced Sketch Relationships ======
Besides basic dimensions, you can define angles and even perform arithmetic operations within dimensioning.
This includes converting units directly in the dimension input (e.g., adding inches to millimeters).
====== Fillet Tool ======
Filleting edges not only improves aesthetics by rounding corners but also reduces stress concentrations, making parts stronger.
Filleting can both add and remove material, depending on whether it’s applied to external or internal edges.
====== Creating Assemblies ======
Assemblies allow you to bring multiple parts together and define their interactions.
Parts can be fixed in space or mated using different types of mates (e.g., fasten, revolute).
Fastened mates lock parts together in a specific orientation, while revolute mates allow rotation around a fixed axis.
17e40d031aa5c517d196c7a11a9807566f7e58b8
1546
1543
2024-06-20T00:59:56Z
Vrtnis
21
wikitext
text/x-wiki
Notes/highlights from onshape talk by [[User:Matt]] as a part of [[K-Scale_Lecture_Circuit]]
[[File:Onshape notes.png|400px|thumb]]
====== Starting with 2D Sketches ======
Begin CAD designs with 2D sketches on planes, then extrude to create 3D shapes.
====== Dimensioning ======
Proper dimensioning ensures accuracy, transforming sketches from blue (undefined) to black (fully defined).
====== Extrude Tool ======
Converts 2D sketches into 3D objects. Options like "new," "add," and "remove" are key for modifying shapes.
====== Rollback Bar ======
Allows reverting to previous steps in the design, useful for troubleshooting and understanding the sequence of operations.
====== Using Constraints ======
Applying constraints like coincident, perpendicular, and parallel defines precise relationships, making the design robust and easier to modify.
====== Editing Sketches and Maintaining Integrity ======
Changing a sketch updates all dependent features, but deleting a sketch can cause errors, emphasizing careful management of dependencies.
====== Extruding Specific Shapes ======
Select specific parts of a sketch to extrude, allowing for complex and controlled modeling.
====== Symmetric Extrusion ======
Extrude a feature equally in both directions from a central plane for balanced and symmetrical parts.
====== Multiple Parts Management ======
Avoid creating multiple disconnected parts within a single part studio. Use assemblies
====== Extruding Closed Shapes ======
Only closed shapes can be extruded. Converting lines to construction lines helps define closed shapes without affecting the extrusion process.
====== Boolean Operations ======
Combine multiple parts into one using Boolean operations, though designing parts separately is often advised for clarity and simplicity.
====== Trim Tool ======
The Trim tool helps clean up sketches by removing unwanted lines and intersections, aiding in creating complex shapes.
====== Sequential Feature Order ======
The order of features (like planes, sketches, and extrusions) in the feature list is crucial. Moving a feature before its dependencies will cause errors.
====== Advanced Sketch Relationships ======
Besides basic dimensions, you can define angles and even perform arithmetic operations within dimensioning.
This includes converting units directly in the dimension input (e.g., adding inches to millimeters).
====== Fillet Tool ======
Filleting edges not only improves aesthetics by rounding corners but also reduces stress concentrations, making parts stronger.
Filleting can both add and remove material, depending on whether it’s applied to external or internal edges.
====== Creating Assemblies ======
Assemblies allow you to bring multiple parts together and define their interactions.
Parts can be fixed in space or mated using different types of mates (e.g., fasten, revolute).
Fastened mates lock parts together in a specific orientation, while revolute mates allow rotation around a fixed axis.
cf687c6ffdfc08772dd0aeda9f09b857683ddcba
1547
1546
2024-06-20T01:02:32Z
Vrtnis
21
wikitext
text/x-wiki
Notes/highlights from the getting-started-with-Onshape talk by [[User:Matt]], given as part of the [[K-Scale_Lecture_Circuit]]
[[File:Onshape notes.png|400px|thumb]]
====== Starting with 2D Sketches ======
Begin CAD designs with 2D sketches on planes, then extrude to create 3D shapes.
====== Dimensioning ======
Proper dimensioning ensures accuracy, transforming sketches from blue (undefined) to black (fully defined).
====== Extrude Tool ======
Converts 2D sketches into 3D objects. Options like "new," "add," and "remove" are key for modifying shapes.
====== Rollback Bar ======
Allows reverting to previous steps in the design, useful for troubleshooting and understanding the sequence of operations.
====== Using Constraints ======
Applying constraints like coincident, perpendicular, and parallel defines precise relationships, making the design robust and easier to modify.
====== Editing Sketches and Maintaining Integrity ======
Changing a sketch updates all dependent features, but deleting a sketch can cause errors, emphasizing careful management of dependencies.
====== Extruding Specific Shapes ======
Select specific parts of a sketch to extrude, allowing for complex and controlled modeling.
====== Symmetric Extrusion ======
Extrude a feature equally in both directions from a central plane for balanced and symmetrical parts.
====== Multiple Parts Management ======
Avoid creating multiple disconnected parts within a single part studio. Use assemblies to combine multiple parts.
====== Extruding Closed Shapes ======
Only closed shapes can be extruded. Converting lines to construction lines helps define closed shapes without affecting the extrusion process.
====== Boolean Operations ======
Combine multiple parts into one using Boolean operations, though designing parts separately is often advised for clarity and simplicity.
====== Trim Tool ======
The Trim tool helps clean up sketches by removing unwanted lines and intersections, aiding in creating complex shapes.
====== Sequential Feature Order ======
The order of features (like planes, sketches, and extrusions) in the feature list is crucial. Moving a feature before its dependencies will cause errors.
====== Advanced Sketch Relationships ======
Besides basic dimensions, you can define angles and even perform arithmetic operations within dimensioning.
This includes converting units directly in the dimension input (e.g., adding inches to millimeters).
====== Fillet Tool ======
Filleting edges not only improves aesthetics by rounding corners but also reduces stress concentrations, making parts stronger.
Filleting can both add and remove material, depending on whether it’s applied to external or internal edges.
====== Creating Assemblies ======
Assemblies allow you to bring multiple parts together and define their interactions.
Parts can be fixed in space or mated using different types of mates (e.g., fasten, revolute).
Fastened mates lock parts together in a specific orientation, while revolute mates allow rotation around a fixed axis.
2e8033929f089a6b77b7333fa7f1e46fdde0549f
Nvidia Jetson: Flashing Custom Firmware
0
315
1541
1442
2024-06-19T18:50:45Z
Vedant
24
wikitext
text/x-wiki
= Developing Custom Firmware =
== For the Jetson Orin Nano ==
= Flashing =
Notes:
* Current design constraints: based on the availability of parts on JLCPCB, so there is a possibility of parts not being found or not existing.
* Flashing is done with the flash.sh script through the following command:
$ sudo ./flash.sh <board> <rootdev>
where <board> is the actual board configuration (Jetson-Nano-XX, etc.) and <rootdev> determines what type of device is being flashed. Use mmcblk0p1 to flash a local storage device (eMMC or SD card).
* To begin flashing, put the device into force recovery mode and then press reset.
* Run the flash script using the command specified above.
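For reference, a minimal flashing sketch might look like the following; the board configuration name jetson-orin-nano-devkit is an assumption, so check the configurations shipped with your Linux for Tegra (L4T) release before using it.
<syntaxhighlight lang="sh">
# Run from the unpacked Linux_for_Tegra directory of the BSP (path is an assumption).
cd Linux_for_Tegra

# With the Jetson in force recovery mode, the host should list it as an NVIDIA USB device.
lsusb | grep -i nvidia

# Flash the local storage device (eMMC or SD card); the board configuration name is assumed.
sudo ./flash.sh jetson-orin-nano-devkit mmcblk0p1
</syntaxhighlight>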
Flashing with a convenience script:
* To avoid having to specify the rootdev and the board configuration each time, a custom flashing script can be used.
Using GPIO pins to implement a protocol:
* You can use the Raspberry Pi GPIO libraries to interface with the pins, configuring them to whatever layout is needed.
* Example: it is possible to directly interface with the I2C system on the Nano by using the Linux terminal itself.
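As a small illustration of that terminal-only approach, the standard i2c-tools utilities can be used directly; the bus number and device address below are placeholders rather than values taken from an actual board.
<syntaxhighlight lang="sh">
# List the I2C buses exposed by the kernel (requires the i2c-tools package).
sudo i2cdetect -l

# Scan bus 1 for connected devices (bus number is a placeholder).
sudo i2cdetect -y -r 1

# Read one byte from register 0x00 of a device at address 0x40 on bus 1 (placeholder values).
sudo i2cget -y 1 0x40 0x00
</syntaxhighlight>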
Current game plan:
* Experiment with programming the GPIO pins: figure out whether there are ways to access the data that the GPIO pins are sending or receiving.
* Test whether it is possible to reconfigure the pins on the Jetson on the firmware side.
Build time:
* On a single Nvidia Nano, it takes about 45 minutes to 1 hour to complete the build. The build is encrypted with RSA. The source is still accessible, but every time changes to the source files need to be reflected, the build files have to be remade. Current goal: figure out whether there is a way to rebuild only specific build files to decrease development time.
Notes:
* The general requirements for building the Linux kernel still apply, e.g. `build-essential`, `bc`, `libssl-dev`, etc.
* `libssl-dev` may need to be installed separately, depending on whether certain packages are already included when running the script.
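On a Debian/Ubuntu host, installing those prerequisites can look like this (the package list follows the notes above):
<syntaxhighlight lang="sh">
# Host-side packages needed before building the kernel.
sudo apt-get update
sudo apt-get install -y build-essential bc libssl-dev
</syntaxhighlight>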
Current approaches to flashing:
* Decompile an existing DTB file into DTS, make the appropriate changes, then recompile it back.
* Take a flash image that already exists and flash it back onto the Jetson.
** Caveat: ensure that the current Jetson is running JetPack 5 or later. If not, you will need to flash JetPack 5 onto it before moving to JetPack 6. There are instructions online that deal with this setup.
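A rough sketch of the DTB round trip using the device tree compiler (dtc); the file names are placeholders:
<syntaxhighlight lang="sh">
# Decompile an existing device tree blob into editable source (file names are placeholders).
dtc -I dtb -O dts -o custom.dts original.dtb

# Edit custom.dts as needed, then recompile it into a blob the flashing tools can use.
dtc -I dts -O dtb -o custom.dtb custom.dts
</syntaxhighlight>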
029ebe14fc5b684c3129b2d54d2eb7a4176cab59
File:Onshape notes.png
6
334
1545
2024-06-20T00:54:47Z
Vrtnis
21
Onshape screenshot
wikitext
text/x-wiki
== Summary ==
Onshape screenshot
2f0e55cd621066cd8f9a073ff8c14ae02c1e883d
Prismatic VLM REPL
0
335
1548
2024-06-20T22:52:19Z
Vrtnis
21
Created page with "Running the generate.py REPL Script from the OpenVLA repo"
wikitext
text/x-wiki
Running the generate.py REPL Script from the OpenVLA repo
30fddd7f52f22dae33a80c0d44a0a9537becf319
1549
1548
2024-06-20T22:55:47Z
Vrtnis
21
wikitext
text/x-wiki
The K-Scale OpenVLA adaptation is at https://github.com/kscalelabs/openvla
Here are some suggestions to running the generate.py REPL Script from the repo if you want to try out OpenVLA
9fbd5b3ca97f9c5987847c88ccfa817c2d3cbfe4
1550
1549
2024-06-20T22:57:26Z
Vrtnis
21
wikitext
text/x-wiki
The K-Scale OpenVLA adaptation by [[User:Pawel]] is at https://github.com/kscalelabs/openvla
Here are some suggestions to running the generate.py REPL Script from the repo if you want to try out OpenVLA
f4f1dbadb97ce13b3ce936ea3f7e2df0684ec4bd
1551
1550
2024-06-20T22:57:49Z
Vrtnis
21
wikitext
text/x-wiki
The K-Scale OpenVLA adaptation by [[User:Paweł]] is at https://github.com/kscalelabs/openvla
Here are some suggestions to running the generate.py REPL Script from the repo if you want to try out OpenVLA
b01e49501e95aa8cc9b083ae129f70cd3534fdec
1552
1551
2024-06-20T22:58:46Z
Vrtnis
21
wikitext
text/x-wiki
The K-Scale OpenVLA adaptation by [[User:Paweł]] is at https://github.com/kscalelabs/openvla
Here are some suggestions to running the generate.py REPL Script from the repo if you just want to try out OpenVLA.
c612b282efc7763fcac7cda79fb836ec0cd223a3
1553
1552
2024-06-20T23:00:29Z
Vrtnis
21
wikitext
text/x-wiki
The K-Scale OpenVLA adaptation by [[User:Paweł]] is at https://github.com/kscalelabs/openvla
Here are some suggestions to running the generate.py REPL Script from the repo if you just want to try out OpenVLA.
== Prerequisites ==
Before running the script, ensure you have the following:
* Python 3.8 or higher installed
* NVIDIA GPU with CUDA support (optional but recommended for faster processing)
* Hugging Face account and token for accessing certain models
== Setting Up the Environment ==
In addition to installing requirements-min.txt from the repo, you probably need to install rich, tensorflow_graphics, tensorflow-datasets and dlimp.
b4167f4dbec06fa9081088c697adf247db294f97
1554
1553
2024-06-20T23:01:16Z
Vrtnis
21
wikitext
text/x-wiki
The K-Scale OpenVLA adaptation by [[User:Paweł]] is at https://github.com/kscalelabs/openvla
Here are some suggestions to running the generate.py REPL Script from the repo if you just want to try out OpenVLA.
== Prerequisites ==
Before running the script, ensure you have the following:
* Python 3.8 or higher installed
* NVIDIA GPU with CUDA support (optional but recommended for faster processing)
* Hugging Face account and token for accessing Meta Lllama
== Setting Up the Environment ==
In addition to installing requirements-min.txt from the repo, you probably need to install rich, tensorflow_graphics, tensorflow-datasets and dlimp.
6a22180ea9eff579bdaa57e1df48498b5338c72f
1555
1554
2024-06-20T23:01:52Z
Vrtnis
21
wikitext
text/x-wiki
The K-Scale OpenVLA adaptation by [[User:Paweł]] is at https://github.com/kscalelabs/openvla
Here are some suggestions to running the generate.py REPL Script from the repo if you just want to try out OpenVLA.
== Prerequisites ==
Before running the script, ensure you have the following:
* Python 3.8 or higher installed
* NVIDIA GPU with CUDA support (optional but recommended for faster processing)
* Hugging Face account and token for accessing Meta Lllama
== Setting Up the Environment ==
In addition to installing requirements-min.txt from the repo, you probably need to install rich, tensorflow_graphics, tensorflow-datasets and dlimp.
[[work in progress,need to add screenshots and next steps]]
011ebf9c6f9d6eb362d94c228fbe1ef02e753bcf
1556
1555
2024-06-20T23:02:08Z
Vrtnis
21
wikitext
text/x-wiki
The K-Scale OpenVLA adaptation by [[User:Paweł]] is at https://github.com/kscalelabs/openvla
Here are some suggestions to running the generate.py REPL Script from the repo if you just want to try out OpenVLA.
== Prerequisites ==
Before running the script, ensure you have the following:
* Python 3.8 or higher installed
* NVIDIA GPU with CUDA support (optional but recommended for faster processing)
* Hugging Face account and token for accessing Meta Lllama
== Setting Up the Environment ==
In addition to installing requirements-min.txt from the repo, you probably need to install rich, tensorflow_graphics, tensorflow-datasets and dlimp.
''work in progress,need to add screenshots and next steps''
d80754b7435f272aaa61312cf7581185c5f9ffb1
1557
1556
2024-06-20T23:07:11Z
Vrtnis
21
wikitext
text/x-wiki
The K-Scale OpenVLA adaptation by [[User:Paweł]] is at https://github.com/kscalelabs/openvla
Here are some suggestions to run the generate.py REPL Script from the repo if you would like to get started with OpenVLA.
== Prerequisites ==
Before running the script, ensure you have the following:
* Python 3.8 or higher installed
* NVIDIA GPU with CUDA support (optional but recommended for faster processing)
* Hugging Face account and token for accessing Meta Lllama
== Setting Up the Environment ==
In addition to installing requirements-min.txt from the repo, you probably need to install rich, tensorflow_graphics, tensorflow-datasets and dlimp.
''work in progress,need to add screenshots and next steps''
9c3a8a8553aae5442081a3abf581c78825e55460
Prismatic VLM REPL
0
335
1558
1557
2024-06-20T23:09:34Z
Vrtnis
21
wikitext
text/x-wiki
The K-Scale OpenVLA adaptation by [[User:Paweł]] is at https://github.com/kscalelabs/openvla
== REPL Script Guide ==
Here are some suggestions to run the generate.py REPL Script from the repo if you would like to get started with OpenVLA.
== Prerequisites ==
Before running the script, ensure you have the following:
* Python 3.8 or higher installed
* NVIDIA GPU with CUDA support (optional but recommended for faster processing)
* Hugging Face account and token for accessing Meta Lllama
== Setting Up the Environment ==
In addition to installing requirements-min.txt from the repo, you probably need to install rich, tensorflow_graphics, tensorflow-datasets and dlimp.
Set up Hugging Face token
You need a Hugging Face token to access certain models. Create a .hf_token file thats needed by the script.
Create a file named `.hf_token` in the root directory of your project and add your Hugging Face token to this file:
<syntaxhighlight lang="sh">
echo "your_hugging_face_token" > .hf_token
</syntaxhighlight>
''work in progress,need to add screenshots and next steps''
82bde130b70a1b4baf14db2322782ef76b7e450c
1559
1558
2024-06-20T23:14:09Z
Vrtnis
21
wikitext
text/x-wiki
The K-Scale OpenVLA adaptation by [[User:Paweł]] is at https://github.com/kscalelabs/openvla
== REPL Script Guide ==
Here are some suggestions to run the generate.py REPL Script from the repo if you would like to get started with OpenVLA.
== Prerequisites ==
Before running the script, ensure you have the following:
* Python 3.8 or higher installed
* NVIDIA GPU with CUDA support (optional but recommended for faster processing)
* Hugging Face account and token for accessing Meta Lllama
== Setting Up the Environment ==
In addition to installing requirements-min.txt from the repo, you probably need to install rich, tensorflow_graphics, tensorflow-datasets and dlimp.
Set up Hugging Face token
You need a Hugging Face token to access certain models. Create a .hf_token file thats needed by the script.
Create a file named `.hf_token` in the root directory of your project and add your Hugging Face token to this file:
<syntaxhighlight lang="sh">
echo "your_hugging_face_token" > .hf_token
</syntaxhighlight>
== Sample Images for generate.py REPL ==
You can get these by capturing frames or screenshotting rollout videos from <pre> https://openvla.github.io/ </pre>
Make sure the images have an end effector in them.
''work in progress,need to add screenshots and next steps''
4c5d10fccf60d2c39710cc427f5c12441ab55ffd
1560
1559
2024-06-20T23:14:54Z
Vrtnis
21
Vrtnis moved page [[OpenVLA]] to [[OpenVLA REPL]]: /*Specificity */
wikitext
text/x-wiki
The K-Scale OpenVLA adaptation by [[User:Paweł]] is at https://github.com/kscalelabs/openvla
== REPL Script Guide ==
Here are some suggestions to run the generate.py REPL Script from the repo if you would like to get started with OpenVLA.
== Prerequisites ==
Before running the script, ensure you have the following:
* Python 3.8 or higher installed
* NVIDIA GPU with CUDA support (optional but recommended for faster processing)
* Hugging Face account and token for accessing Meta Lllama
== Setting Up the Environment ==
In addition to installing requirements-min.txt from the repo, you probably need to install rich, tensorflow_graphics, tensorflow-datasets and dlimp.
Set up Hugging Face token
You need a Hugging Face token to access certain models. Create a .hf_token file thats needed by the script.
Create a file named `.hf_token` in the root directory of your project and add your Hugging Face token to this file:
<syntaxhighlight lang="sh">
echo "your_hugging_face_token" > .hf_token
</syntaxhighlight>
== Sample Images for generate.py REPL ==
You can get these by capturing frames or screenshotting rollout videos from <pre> https://openvla.github.io/ </pre>
Make sure the images have an end effector in them.
''work in progress,need to add screenshots and next steps''
4c5d10fccf60d2c39710cc427f5c12441ab55ffd
1562
1560
2024-06-20T23:20:36Z
Vrtnis
21
wikitext
text/x-wiki
The K-Scale OpenVLA adaptation by [[User:Paweł]] is at https://github.com/kscalelabs/openvla
== REPL Script Guide ==
Here are some suggestions to run the generate.py REPL Script from the repo (you can find this in the extern folder) if you would like to get started with OpenVLA.
== Prerequisites ==
Before running the script, ensure you have the following:
* Python 3.8 or higher installed
* NVIDIA GPU with CUDA support (optional but recommended for faster processing)
* Hugging Face account and token for accessing Meta Lllama
== Setting Up the Environment ==
In addition to installing requirements-min.txt from the repo, you probably need to install rich, tensorflow_graphics, tensorflow-datasets and dlimp.
Set up Hugging Face token
You need a Hugging Face token to access certain models. Create a .hf_token file thats needed by the script.
Create a file named `.hf_token` in the root directory of your project and add your Hugging Face token to this file:
<syntaxhighlight lang="sh">
echo "your_hugging_face_token" > .hf_token
</syntaxhighlight>
== Sample Images for generate.py REPL ==
You can get these by capturing frames or screenshotting rollout videos from <pre> https://openvla.github.io/ </pre>
Make sure the images have an end effector in them.
''work in progress,need to add screenshots and next steps''
3ec73b85dcff93224bdc5edb57b66bcc5d0d144b
1563
1562
2024-06-20T23:22:15Z
Vrtnis
21
wikitext
text/x-wiki
The K-Scale OpenVLA adaptation by [[User:Paweł]] is at https://github.com/kscalelabs/openvla
== REPL Script Guide ==
Here are some suggestions to run the generate.py REPL Script from the repo (you can find this in the scripts folder) if you would like to get started with OpenVLA.
== Prerequisites ==
Before running the script, ensure you have the following:
* Python 3.8 or higher installed
* NVIDIA GPU with CUDA support (optional but recommended for faster processing)
* Hugging Face account and token for accessing Meta Lllama
== Setting Up the Environment ==
In addition to installing requirements-min.txt from the repo, you probably need to install rich, tensorflow_graphics, tensorflow-datasets and dlimp.
Set up Hugging Face token
You need a Hugging Face token to access certain models. Create a .hf_token file thats needed by the script.
Create a file named `.hf_token` in the root directory of your project and add your Hugging Face token to this file:
<syntaxhighlight lang="sh">
echo "your_hugging_face_token" > .hf_token
</syntaxhighlight>
== Sample Images for generate.py REPL ==
You can get these by capturing frames or screenshotting rollout videos from <pre> https://openvla.github.io/ </pre>
Make sure the images have an end effector in them.
''work in progress,need to add screenshots and next steps''
99f464ddfa5cd7ca64ebaed9195ac3c17327fdfe
1564
1563
2024-06-20T23:22:28Z
Vrtnis
21
wikitext
text/x-wiki
The K-Scale OpenVLA adaptation by [[User:Paweł]] is at https://github.com/kscalelabs/openvla
== REPL Script Guide ==
Here are some suggestions to run the generate.py REPL Script from the repo (you can find this in the '''scripts''' folder) if you would like to get started with OpenVLA.
== Prerequisites ==
Before running the script, ensure you have the following:
* Python 3.8 or higher installed
* NVIDIA GPU with CUDA support (optional but recommended for faster processing)
* Hugging Face account and token for accessing Meta Lllama
== Setting Up the Environment ==
In addition to installing requirements-min.txt from the repo, you probably need to install rich, tensorflow_graphics, tensorflow-datasets and dlimp.
Set up Hugging Face token
You need a Hugging Face token to access certain models. Create a .hf_token file thats needed by the script.
Create a file named `.hf_token` in the root directory of your project and add your Hugging Face token to this file:
<syntaxhighlight lang="sh">
echo "your_hugging_face_token" > .hf_token
</syntaxhighlight>
== Sample Images for generate.py REPL ==
You can get these by capturing frames or screenshotting rollout videos from <pre> https://openvla.github.io/ </pre>
Make sure the images have an end effector in them.
''work in progress,need to add screenshots and next steps''
5e6b30f1aba4e1bc477af59f68e10e39455637a9
1565
1564
2024-06-20T23:28:11Z
Vrtnis
21
wikitext
text/x-wiki
The K-Scale OpenVLA adaptation by [[User:Paweł]] is at https://github.com/kscalelabs/openvla
== REPL Script Guide ==
Here are some suggestions to run the generate.py REPL Script from the repo (you can find this in the '''scripts''' folder) if you would like to get started with OpenVLA.
== Prerequisites ==
Before running the script, ensure you have the following:
* Python 3.8 or higher installed
* NVIDIA GPU with CUDA support (optional but recommended for faster processing)
* Hugging Face account and token for accessing Meta Lllama
== Setting Up the Environment ==
In addition to installing requirements-min.txt from the repo, you probably need to install rich, tensorflow_graphics, tensorflow-datasets and dlimp.
Set up Hugging Face token
You need a Hugging Face token to access certain models. Create a .hf_token file thats needed by the script.
Create a file named `.hf_token` in the root directory of your project and add your Hugging Face token to this file:
<syntaxhighlight lang="sh">
echo "your_hugging_face_token" > .hf_token
</syntaxhighlight>
== Sample Images for generate.py REPL ==
You can get these by capturing frames or screenshotting rollout videos from <pre> https://openvla.github.io/ </pre>
Make sure the images have an end effector in them.
== Starting REPL mode ==
Then, run generate.py. The script starts by initializing the generation playground with the Prismatic model prism-dinosiglip+7b.
The model prism-dinosiglip+7b is downloaded from the Hugging Face Hub.
The model configuration is found and then the model is loaded with the following components:
Vision Backbone: dinosiglip-vit-so-384px
Language Model (LLM) Backbone: llama2-7b-pure (this is also where the hf token comes into play)
Architecture Specifier: no-align+fused-gelu-mlp
Checkpoint Path: The model checkpoint is loaded from a specific path in the cache.
''After loading the model, the script enters a REPL mode, allowing the user to interact with the model. The REPL mode provides a default generation setup and waits for user inputs.''
''work in progress,need to add screenshots and next steps''
920bdc3c9e86e4947ac96b4bb298aab13518398b
1567
1565
2024-06-20T23:32:06Z
Vrtnis
21
wikitext
text/x-wiki
The K-Scale OpenVLA adaptation by [[User:Paweł]] is at https://github.com/kscalelabs/openvla
== REPL Script Guide ==
Here are some suggestions to run the generate.py REPL Script from the repo (you can find this in the '''scripts''' folder) if you would like to get started with OpenVLA.
== Prerequisites ==
Before running the script, ensure you have the following:
* Python 3.8 or higher installed
* NVIDIA GPU with CUDA support (optional but recommended for faster processing)
* Hugging Face account and token for accessing Meta Lllama
== Setting Up the Environment ==
In addition to installing requirements-min.txt from the repo, you probably need to install rich, tensorflow_graphics, tensorflow-datasets and dlimp.
Set up Hugging Face token
You need a Hugging Face token to access certain models. Create a .hf_token file thats needed by the script.
Create a file named `.hf_token` in the root directory of your project and add your Hugging Face token to this file:
<syntaxhighlight lang="sh">
echo "your_hugging_face_token" > .hf_token
</syntaxhighlight>
== Sample Images for generate.py REPL ==
You can get these by capturing frames or screenshotting rollout videos from <pre> https://openvla.github.io/ </pre>
Make sure the images have an end effector in them.
== Starting REPL mode ==
Then, run generate.py. The script starts by initializing the generation playground with the Prismatic model prism-dinosiglip+7b.
The model prism-dinosiglip+7b is downloaded from the Hugging Face Hub.
The model configuration is found and then the model is loaded with the following components:
Vision Backbone: dinosiglip-vit-so-384px
Language Model (LLM) Backbone: llama2-7b-pure (this is also where the hf token comes into play)
Architecture Specifier: no-align+fused-gelu-mlp
Checkpoint Path: The model checkpoint is loaded from a specific path in the cache.
''After loading the model, the script enters a REPL mode, allowing the user to interact with the model. The REPL mode provides a default generation setup and waits for user inputs.''
[[File:Openvla1.png|800px|openvla models]]
''work in progress,need to add screenshots and next steps''
beddcecc71daa5a01de0f577475e80db70cd3022
1568
1567
2024-06-20T23:32:33Z
Vrtnis
21
wikitext
text/x-wiki
The K-Scale OpenVLA adaptation by [[User:Paweł]] is at https://github.com/kscalelabs/openvla
== REPL Script Guide ==
Here are some suggestions to run the generate.py REPL Script from the repo (you can find this in the '''scripts''' folder) if you would like to get started with OpenVLA.
== Prerequisites ==
Before running the script, ensure you have the following:
* Python 3.8 or higher installed
* NVIDIA GPU with CUDA support (optional but recommended for faster processing)
* Hugging Face account and token for accessing Meta Lllama
== Setting Up the Environment ==
In addition to installing requirements-min.txt from the repo, you probably need to install rich, tensorflow_graphics, tensorflow-datasets and dlimp.
Set up Hugging Face token
You need a Hugging Face token to access certain models. Create a .hf_token file thats needed by the script.
Create a file named `.hf_token` in the root directory of your project and add your Hugging Face token to this file:
<syntaxhighlight lang="sh">
echo "your_hugging_face_token" > .hf_token
</syntaxhighlight>
== Sample Images for generate.py REPL ==
You can get these by capturing frames or screenshotting rollout videos from <pre> https://openvla.github.io/ </pre>
Make sure the images have an end effector in them.
== Starting REPL mode ==
Then, run generate.py. The script starts by initializing the generation playground with the Prismatic model prism-dinosiglip+7b.
The model prism-dinosiglip+7b is downloaded from the Hugging Face Hub.
The model configuration is found and then the model is loaded with the following components:
Vision Backbone: dinosiglip-vit-so-384px
Language Model (LLM) Backbone: llama2-7b-pure (this is also where the hf token comes into play)
Architecture Specifier: no-align+fused-gelu-mlp
Checkpoint Path: The model checkpoint is loaded from a specific path in the cache.
You should see this in your terminal:
[[File:Openvla1.png|800px|openvla models]]
''After loading the model, the script enters a REPL mode, allowing the user to interact with the model. The REPL mode provides a default generation setup and waits for user inputs.''
''work in progress,need to add screenshots and next steps''
bc3c5564a457da3cefdfa2542876e8717b32c0fa
1570
1568
2024-06-20T23:46:06Z
Vrtnis
21
wikitext
text/x-wiki
The K-Scale OpenVLA adaptation by [[User:Paweł]] is at https://github.com/kscalelabs/openvla
== REPL Script Guide ==
Here are some suggestions to run the generate.py REPL Script from the repo (you can find this in the '''scripts''' folder) if you would like to get started with OpenVLA.
== Prerequisites ==
Before running the script, ensure you have the following:
* Python 3.8 or higher installed
* NVIDIA GPU with CUDA support (optional but recommended for faster processing)
* Hugging Face account and token for accessing Meta Lllama
== Setting Up the Environment ==
In addition to installing requirements-min.txt from the repo, you probably need to install rich, tensorflow_graphics, tensorflow-datasets and dlimp.
Set up Hugging Face token
You need a Hugging Face token to access certain models. Create a .hf_token file thats needed by the script.
Create a file named `.hf_token` in the root directory of your project and add your Hugging Face token to this file:
<syntaxhighlight lang="sh">
echo "your_hugging_face_token" > .hf_token
</syntaxhighlight>
== Sample Images for generate.py REPL ==
You can get these by capturing frames or screenshotting rollout videos from <pre> https://openvla.github.io/ </pre>
Make sure the images have an end effector in them.
[[File:Coke can2.png|800px|Can pickup task]]
== Starting REPL mode ==
Then, run generate.py. The script starts by initializing the generation playground with the Prismatic model prism-dinosiglip+7b.
The model prism-dinosiglip+7b is downloaded from the Hugging Face Hub.
The model configuration is found and then the model is loaded with the following components:
Vision Backbone: dinosiglip-vit-so-384px
Language Model (LLM) Backbone: llama2-7b-pure (this is also where the hf token comes into play)
Architecture Specifier: no-align+fused-gelu-mlp
Checkpoint Path: The model checkpoint is loaded from a specific path in the cache.
You should see this in your terminal:
[[File:Openvla1.png|800px|openvla models]]
''After loading the model, the script enters a REPL mode, allowing the user to interact with the model. The REPL mode provides a default generation setup and waits for user inputs.''
''work in progress,need to add screenshots and next steps''
4c2f843d9ef3fe2a689a3b45d7d83ba52a77aafe
1571
1570
2024-06-20T23:46:37Z
Vrtnis
21
/* Sample Images for generate.py REPL */
wikitext
text/x-wiki
The K-Scale OpenVLA adaptation by [[User:Paweł]] is at https://github.com/kscalelabs/openvla
== REPL Script Guide ==
Here are some suggestions to run the generate.py REPL Script from the repo (you can find this in the '''scripts''' folder) if you would like to get started with OpenVLA.
== Prerequisites ==
Before running the script, ensure you have the following:
* Python 3.8 or higher installed
* NVIDIA GPU with CUDA support (optional but recommended for faster processing)
* Hugging Face account and token for accessing Meta Llama
== Setting Up the Environment ==
In addition to installing requirements-min.txt from the repo, you probably need to install rich, tensorflow_graphics, tensorflow-datasets and dlimp.
Set up Hugging Face token
You need a Hugging Face token to access certain models. Create the .hf_token file that's needed by the script.
Create a file named `.hf_token` in the root directory of your project and add your Hugging Face token to this file:
<syntaxhighlight lang="sh">
echo "your_hugging_face_token" > .hf_token
</syntaxhighlight>
== Sample Images for generate.py REPL ==
You can get these by capturing frames or screenshotting rollout videos from <pre> https://openvla.github.io/ </pre>
Make sure the images have an end effector in them.
[[File:Coke can2.png|400px|Can pickup task]]
== Starting REPL mode ==
Then, run generate.py. The script starts by initializing the generation playground with the Prismatic model prism-dinosiglip+7b.
The model prism-dinosiglip+7b is downloaded from the Hugging Face Hub.
The model configuration is found and then the model is loaded with the following components:
Vision Backbone: dinosiglip-vit-so-384px
Language Model (LLM) Backbone: llama2-7b-pure (this is also where the hf token comes into play)
Architecture Specifier: no-align+fused-gelu-mlp
Checkpoint Path: The model checkpoint is loaded from a specific path in the cache.
You should see this in your terminal:
[[File:Openvla1.png|800px|openvla models]]
''After loading the model, the script enters a REPL mode, allowing the user to interact with the model. The REPL mode provides a default generation setup and waits for user inputs.''
''work in progress, need to add screenshots and next steps''
ddcb0aef4b803ca012acf21991b6c782247c9988
1572
1571
2024-06-20T23:46:48Z
Vrtnis
21
/* Sample Images for generate.py REPL */
wikitext
text/x-wiki
The K-Scale OpenVLA adaptation by [[User:Paweł]] is at https://github.com/kscalelabs/openvla
== REPL Script Guide ==
Here are some suggestions to run the generate.py REPL Script from the repo (you can find this in the '''scripts''' folder) if you would like to get started with OpenVLA.
== Prerequisites ==
Before running the script, ensure you have the following:
* Python 3.8 or higher installed
* NVIDIA GPU with CUDA support (optional but recommended for faster processing)
* Hugging Face account and token for accessing Meta Llama
== Setting Up the Environment ==
In addition to installing requirements-min.txt from the repo, you probably need to install rich, tensorflow_graphics, tensorflow-datasets and dlimp.
Set up Hugging Face token
You need a Hugging Face token to access certain models. Create the .hf_token file that's needed by the script.
Create a file named `.hf_token` in the root directory of your project and add your Hugging Face token to this file:
<syntaxhighlight lang="sh">
echo "your_hugging_face_token" > .hf_token
</syntaxhighlight>
== Sample Images for generate.py REPL ==
You can get these by capturing frames or screenshotting rollout videos from <pre> https://openvla.github.io/ </pre>
Make sure the images have an end effector in them.
[[File:Coke can2.png|400px|Can pickup task]]
== Starting REPL mode ==
Then, run generate.py. The script starts by initializing the generation playground with the Prismatic model prism-dinosiglip+7b.
The model prism-dinosiglip+7b is downloaded from the Hugging Face Hub.
The model configuration is found and then the model is loaded with the following components:
Vision Backbone: dinosiglip-vit-so-384px
Language Model (LLM) Backbone: llama2-7b-pure (this is also where the hf token comes into play)
Architecture Specifier: no-align+fused-gelu-mlp
Checkpoint Path: The model checkpoint is loaded from a specific path in the cache.
You should see this in your terminal:
[[File:Openvla1.png|800px|openvla models]]
''After loading the model, the script enters a REPL mode, allowing the user to interact with the model. The REPL mode provides a default generation setup and waits for user inputs.''
''work in progress, need to add screenshots and next steps''
a457c27f1f41e0b0abe44b3467abf6e6d1b09327
1573
1572
2024-06-20T23:48:30Z
Vrtnis
21
wikitext
text/x-wiki
The K-Scale OpenVLA adaptation by [[User:Paweł]] is at https://github.com/kscalelabs/openvla
== REPL Script Guide ==
Here are some suggestions to run the generate.py REPL Script from the repo (you can find this in the '''scripts''' folder) if you would like to get started with OpenVLA.
== Prerequisites ==
Before running the script, ensure you have the following:
* Python 3.8 or higher installed
* NVIDIA GPU with CUDA support (optional but recommended for faster processing)
* Hugging Face account and token for accessing Meta Llama
== Setting Up the Environment ==
In addition to installing requirements-min.txt from the repo, you probably need to install rich, tensorflow_graphics, tensorflow-datasets and dlimp.
Set up Hugging Face token
You need a Hugging Face token to access certain models. Create the .hf_token file that's needed by the script.
Create a file named `.hf_token` in the root directory of your project and add your Hugging Face token to this file:
<syntaxhighlight lang="sh">
echo "your_hugging_face_token" > .hf_token
</syntaxhighlight>
== Sample Images for generate.py REPL ==
You can get these by capturing frames or screenshotting rollout videos from <pre> https://openvla.github.io/ </pre>
Make sure the images have an end effector in them.
[[File:Coke can2.png|400px|Can pickup task]]
== Starting REPL mode ==
Then, run generate.py. The script starts by initializing the generation playground with the Prismatic model prism-dinosiglip+7b.
The model prism-dinosiglip+7b is downloaded from the Hugging Face Hub.
The model configuration is found and then the model is loaded with the following components:
Vision Backbone: dinosiglip-vit-so-384px
Language Model (LLM) Backbone: llama2-7b-pure (this is also where the hf token comes into play)
Architecture Specifier: no-align+fused-gelu-mlp
Checkpoint Path: The model checkpoint is loaded from a specific path in the cache.
You should see this in your terminal:
[[File:Openvla1.png|800px|openvla models]]
''After loading the model, the script enters a REPL mode, allowing the user to interact with the model. The REPL mode provides a default generation setup and waits for user inputs.''
Basically, the generate.py script runs a REPL that allows users to interactively test generating outputs from the Prismatic model prism-dinosiglip+7b. Upon running the script, users can enter commands in the REPL prompt:
type (i) to load a new local image by specifying its path,
(p) to update the prompt template for generating outputs,
(q) to quit the REPL, or directly input a prompt to generate a response based on the loaded image and the specified prompt.
''work in progress, need to add screenshots and next steps''
6535b2b956f0d37fb4ac16c40008c652ca1a5c65
1574
1573
2024-06-21T00:31:16Z
Vrtnis
21
Vrtnis moved page [[OpenVLA REPL]] to [[Prismatic VLM REPL]]
wikitext
text/x-wiki
The K-Scale OpenVLA adaptation by [[User:Paweł]] is at https://github.com/kscalelabs/openvla
== REPL Script Guide ==
Here are some suggestions to run the generate.py REPL Script from the repo (you can find this in the '''scripts''' folder) if you would like to get started with OpenVLA.
== Prerequisites ==
Before running the script, ensure you have the following:
* Python 3.8 or higher installed
* NVIDIA GPU with CUDA support (optional but recommended for faster processing)
* Hugging Face account and token for accessing Meta Llama
== Setting Up the Environment ==
In addition to installing requirements-min.txt from the repo, you probably need to install rich, tensorflow_graphics, tensorflow-datasets and dlimp.
Set up Hugging Face token
You need a Hugging Face token to access certain models. Create the .hf_token file that's needed by the script.
Create a file named `.hf_token` in the root directory of your project and add your Hugging Face token to this file:
<syntaxhighlight lang="sh">
echo "your_hugging_face_token" > .hf_token
</syntaxhighlight>
== Sample Images for generate.py REPL ==
You can get these by capturing frames or screenshotting rollout videos from <pre> https://openvla.github.io/ </pre>
Make sure the images have an end effector in them.
[[File:Coke can2.png|400px|Can pickup task]]
== Starting REPL mode ==
Then, run generate.py. The script starts by initializing the generation playground with the Prismatic model prism-dinosiglip+7b.
The model prism-dinosiglip+7b is downloaded from the Hugging Face Hub.
The model configuration is found and then the model is loaded with the following components:
Vision Backbone: dinosiglip-vit-so-384px
Language Model (LLM) Backbone: llama2-7b-pure (this is also where the hf token comes into play)
Architecture Specifier: no-align+fused-gelu-mlp
Checkpoint Path: The model checkpoint is loaded from a specific path in the cache.
You should see this in your terminal:
[[File:Openvla1.png|800px|openvla models]]
''After loading the model, the script enters a REPL mode, allowing the user to interact with the model. The REPL mode provides a default generation setup and waits for user inputs.''
Basically, the generate.py script runs a REPL that allows users to interactively test generating outputs from the Prismatic model prism-dinosiglip+7b. Upon running the script, users can enter commands in the REPL prompt:
type (i) to load a new local image by specifying its path,
(p) to update the prompt template for generating outputs,
(q) to quit the REPL, or directly input a prompt to generate a response based on the loaded image and the specified prompt.
''work in progress, need to add screenshots and next steps''
6535b2b956f0d37fb4ac16c40008c652ca1a5c65
1576
1574
2024-06-21T00:32:08Z
Vrtnis
21
wikitext
text/x-wiki
Prismatic VLM is the project upon which OpenVLA is based. The K-Scale OpenVLA adaptation by [[User:Paweł]] is at https://github.com/kscalelabs/openvla
== REPL Script Guide ==
Here are some suggestions to run the generate.py REPL Script from the repo (you can find this in the '''scripts''' folder) if you would like to get started with OpenVLA.
== Prerequisites ==
Before running the script, ensure you have the following:
* Python 3.8 or higher installed
* NVIDIA GPU with CUDA support (optional but recommended for faster processing)
* Hugging Face account and token for accessing Meta Llama
== Setting Up the Environment ==
In addition to installing requirements-min.txt from the repo, you probably need to install rich, tensorflow_graphics, tensorflow-datasets and dlimp.
Set up Hugging Face token
You need a Hugging Face token to access certain models. Create the .hf_token file that's needed by the script.
Create a file named `.hf_token` in the root directory of your project and add your Hugging Face token to this file:
<syntaxhighlight lang="sh">
echo "your_hugging_face_token" > .hf_token
</syntaxhighlight>
== Sample Images for generate.py REPL ==
You can get these by capturing frames or screenshotting rollout videos from <pre> https://openvla.github.io/ </pre>
Make sure the images have an end effector in them.
[[File:Coke can2.png|400px|Can pickup task]]
== Starting REPL mode ==
Then, run generate.py. The script starts by initializing the generation playground with the Prismatic model prism-dinosiglip+7b.
The model prism-dinosiglip+7b is downloaded from the Hugging Face Hub.
The model configuration is found and then the model is loaded with the following components:
Vision Backbone: dinosiglip-vit-so-384px
Language Model (LLM) Backbone: llama2-7b-pure (this is also where the hf token comes into play)
Architecture Specifier: no-align+fused-gelu-mlp
Checkpoint Path: The model checkpoint is loaded from a specific path in the cache.
You should see this in your terminal:
[[File:Openvla1.png|800px|openvla models]]
''After loading the model, the script enters a REPL mode, allowing the user to interact with the model. The REPL mode provides a default generation setup and waits for user inputs.''
Basically, the generate.py script runs a REPL that allows users to interactively test generating outputs from the Prismatic model prism-dinosiglip+7b. Upon running the script, users can enter commands in the REPL prompt:
type (i) to load a new local image by specifying its path,
(p) to update the prompt template for generating outputs,
(q) to quit the REPL, or directly input a prompt to generate a response based on the loaded image and the specified prompt.
''work in progress, need to add screenshots and next steps''
44e653d5e94d93f3eb32b9bae00383f987551b50
1577
1576
2024-06-21T00:32:28Z
Vrtnis
21
/* REPL Script Guide */
wikitext
text/x-wiki
Prismatic VLM is the project upon which OpenVLA is based. The K-Scale OpenVLA adaptation by [[User:Paweł]] is at https://github.com/kscalelabs/openvla
== REPL Script Guide ==
Here are some suggestions to run the generate.py REPL Script from the repo (you can find this in the '''scripts''' folder) if you would like to get started.
== Prerequisites ==
Before running the script, ensure you have the following:
* Python 3.8 or higher installed
* NVIDIA GPU with CUDA support (optional but recommended for faster processing)
* Hugging Face account and token for accessing Meta Llama
== Setting Up the Environment ==
In addition to installing requirements-min.txt from the repo, you probably need to install rich, tensorflow_graphics, tensorflow-datasets and dlimp.
Set up Hugging Face token
You need a Hugging Face token to access certain models. Create the .hf_token file that's needed by the script.
Create a file named `.hf_token` in the root directory of your project and add your Hugging Face token to this file:
<syntaxhighlight lang="sh">
echo "your_hugging_face_token" > .hf_token
</syntaxhighlight>
== Sample Images for generate.py REPL ==
You can get these by capturing frames or screenshotting rollout videos from <pre> https://openvla.github.io/ </pre>
Make sure the images have an end effector in them.
[[File:Coke can2.png|400px|Can pickup task]]
== Starting REPL mode ==
Then, run generate.py. The script starts by initializing the generation playground with the Prismatic model prism-dinosiglip+7b.
The model prism-dinosiglip+7b is downloaded from the Hugging Face Hub.
The model configuration is found and then the model is loaded with the following components:
Vision Backbone: dinosiglip-vit-so-384px
Language Model (LLM) Backbone: llama2-7b-pure (this is also where the hf token comes into play)
Architecture Specifier: no-align+fused-gelu-mlp
Checkpoint Path: The model checkpoint is loaded from a specific path in the cache.
You should see this in your terminal:
[[File:Openvla1.png|800px|openvla models]]
''After loading the model, the script enters a REPL mode, allowing the user to interact with the model. The REPL mode provides a default generation setup and waits for user inputs.''
Basically, the generate.py script runs a REPL that allows users to interactively test generating outputs from the Prismatic model prism-dinosiglip+7b. Upon running the script, users can enter commands in the REPL prompt:
type (i) to load a new local image by specifying its path,
(p) to update the prompt template for generating outputs,
(q) to quit the REPL, or directly input a prompt to generate a response based on the loaded image and the specified prompt.
''work in progress, need to add screenshots and next steps''
299b323af6d44cccc06f13c2ac88ed547d4f4400
1578
1577
2024-06-21T00:34:31Z
Vrtnis
21
wikitext
text/x-wiki
Prismatic VLM is the project upon which OpenVLA is based. The generate script is also available in the OpenVLA repo, but it essentially uses Prismatic (see https://github.com/openvla/openvla/issues/5). The K-Scale OpenVLA adaptation by [[User:Paweł]] is at https://github.com/kscalelabs/openvla
== Prismatic REPL Script Guide ==
Here are some suggestions to run the generate.py REPL Script from the repo (you can find this in the '''scripts''' folder) if you would like to get started.
== Prerequisites ==
Before running the script, ensure you have the following:
* Python 3.8 or higher installed
* NVIDIA GPU with CUDA support (optional but recommended for faster processing)
* Hugging Face account and token for accessing Meta Llama
== Setting Up the Environment ==
In addition to installing requirements-min.txt from the repo, you probably need to install rich, tensorflow_graphics, tensorflow-datasets and dlimp.
Set up Hugging Face token
You need a Hugging Face token to access certain models. Create the .hf_token file that's needed by the script.
Create a file named `.hf_token` in the root directory of your project and add your Hugging Face token to this file:
<syntaxhighlight lang="sh">
echo "your_hugging_face_token" > .hf_token
</syntaxhighlight>
== Sample Images for generate.py REPL ==
You can get these by capturing frames or screenshotting rollout videos from <pre> https://openvla.github.io/ </pre>
Make sure the images have an end effector in them.
[[File:Coke can2.png|400px|Can pickup task]]
== Starting REPL mode ==
Then, run generate.py. The script starts by initializing the generation playground with the Prismatic model prism-dinosiglip+7b.
The model prism-dinosiglip+7b is downloaded from the Hugging Face Hub.
The model configuration is found and then the model is loaded with the following components:
Vision Backbone: dinosiglip-vit-so-384px
Language Model (LLM) Backbone: llama2-7b-pure (this is also where the hf token comes into play)
Architecture Specifier: no-align+fused-gelu-mlp
Checkpoint Path: The model checkpoint is loaded from a specific path in the cache.
You should see this in your terminal:
[[File:Openvla1.png|800px|openvla models]]
''After loading the model, the script enters a REPL mode, allowing the user to interact with the model. The REPL mode provides a default generation setup and waits for user inputs.''
Basically, the generate.py script runs a REPL that allows users to interactively test generating outputs from the Prismatic model prism-dinosiglip+7b. Upon running the script, users can enter commands in the REPL prompt:
type (i) to load a new local image by specifying its path,
(p) to update the prompt template for generating outputs,
(q) to quit the REPL, or directly input a prompt to generate a response based on the loaded image and the specified prompt.
''work in progress, need to add screenshots and next steps''
e58c3e81d65cf63a7d5d7f4f8a20a41674a414d7
1579
1578
2024-06-21T00:36:30Z
Vrtnis
21
wikitext
text/x-wiki
Prismatic VLM is the project upon which OpenVLA is based. The generate script is also available in the OpenVLA repo, but it essentially uses Prismatic. Note that Prismatic models generate natural language, whereas OpenVLA models were trained to generate robot actions (see https://github.com/openvla/openvla/issues/5).
Of note, the K-Scale OpenVLA adaptation by [[User:Paweł]] is at https://github.com/kscalelabs/openvla
== Prismatic REPL Script Guide ==
Here are some suggestions to run the generate.py REPL Script from the repo (you can find this in the '''scripts''' folder) if you would like to get started.
== Prerequisites ==
Before running the script, ensure you have the following:
* Python 3.8 or higher installed
* NVIDIA GPU with CUDA support (optional but recommended for faster processing)
* Hugging Face account and token for accessing Meta Llama
== Setting Up the Environment ==
In addition to installing requirements-min.txt from the repo, you probably need to install rich, tensorflow_graphics, tensorflow-datasets and dlimp.
Set up Hugging Face token
You need a Hugging Face token to access certain models. Create the .hf_token file that's needed by the script.
Create a file named `.hf_token` in the root directory of your project and add your Hugging Face token to this file:
<syntaxhighlight lang="sh">
echo "your_hugging_face_token" > .hf_token
</syntaxhighlight>
== Sample Images for generate.py REPL ==
You can get these by capturing frames or screenshotting rollout videos from <pre> https://openvla.github.io/ </pre>
Make sure the images have an end effector in them.
[[File:Coke can2.png|400px|Can pickup task]]
== Starting REPL mode ==
Then, run generate.py. The script starts by initializing the generation playground with the Prismatic model prism-dinosiglip+7b.
The model prism-dinosiglip+7b is downloaded from the Hugging Face Hub.
The model configuration is found and then the model is loaded with the following components:
Vision Backbone: dinosiglip-vit-so-384px
Language Model (LLM) Backbone: llama2-7b-pure (this is also where the hf token comes into play)
Architecture Specifier: no-align+fused-gelu-mlp
Checkpoint Path: The model checkpoint is loaded from a specific path in the cache.
You should see this in your terminal:
[[File:Openvla1.png|800px|openvla models]]
''After loading the model, the script enters a REPL mode, allowing the user to interact with the model. The REPL mode provides a default generation setup and waits for user inputs.''
Basically, the generate.py script runs a REPL that allows users to interactively test generating outputs from the Prismatic model prism-dinosiglip+7b. Upon running the script, users can enter commands in the REPL prompt:
type (i) to load a new local image by specifying its path,
(p) to update the prompt template for generating outputs,
(q) to quit the REPL, or directly input a prompt to generate a response based on the loaded image and the specified prompt.
''work in progress, need to add screenshots and next steps''
631cc30ff01d22aeba7b13a3dd6d3c916bede346
1581
1579
2024-06-21T00:40:07Z
Vrtnis
21
wikitext
text/x-wiki
Prismatic VLM is the project upon which OpenVLA is based. The generate script is also available in the OpenVLA repo, but it essentially uses Prismatic. Note that Prismatic models generate natural language, whereas OpenVLA models were trained to generate robot actions (see https://github.com/openvla/openvla/issues/5).
Of note, the K-Scale OpenVLA adaptation by [[User:Paweł]] is at https://github.com/kscalelabs/openvla
== Prismatic REPL Script Guide ==
Here are some suggestions to run the generate.py REPL Script from the repo (you can find this in the '''scripts''' folder) if you would like to get started.
== Prerequisites ==
Before running the script, ensure you have the following:
* Python 3.8 or higher installed
* NVIDIA GPU with CUDA support (optional but recommended for faster processing)
* Hugging Face account and token for accessing Meta Llama
== Setting Up the Environment ==
In addition to installing requirements-min.txt from the repo, you probably need to install rich, tensorflow_graphics, tensorflow-datasets and dlimp.
Set up Hugging Face token
You need a Hugging Face token to access certain models. Create the .hf_token file that's needed by the script.
Create a file named `.hf_token` in the root directory of your project and add your Hugging Face token to this file:
<syntaxhighlight lang="sh">
echo "your_hugging_face_token" > .hf_token
</syntaxhighlight>
== Sample Images for generate.py REPL ==
You can get these by capturing frames or screenshotting rollout videos from <pre> https://openvla.github.io/ </pre>
Make sure the images have an end effector in them.
[[File:Coke can2.png|400px|Can pickup task]]
== Starting REPL mode ==
Then, run generate.py. The script starts by initializing the generation playground with the Prismatic model prism-dinosiglip+7b.
The model prism-dinosiglip+7b is downloaded from the Hugging Face Hub.
The model configuration is found and then the model is loaded with the following components:
Vision Backbone: dinosiglip-vit-so-384px
Language Model (LLM) Backbone: llama2-7b-pure (this is also where the hf token comes into play)
Architecture Specifier: no-align+fused-gelu-mlp
Checkpoint Path: The model checkpoint is loaded from a specific path in the cache.
You should see this in your terminal:
[[File:Openvla1.png|800px|openvla models]]
''After loading the model, the script enters a REPL mode, allowing the user to interact with the model. The REPL mode provides a default generation setup and waits for user inputs.''
Basically, the generate.py script runs a REPL that allows users to interactively test generating outputs from the Prismatic model prism-dinosiglip+7b. Upon running the script, users can enter commands in the REPL prompt:
type (i) to load a new local image by specifying its path,
(p) to update the prompt template for generating outputs,
(q) to quit the REPL, or directly input a prompt to generate a response based on the loaded image and the specified prompt.
[[File:Prismatic chat1.png|800px|openvla models]]
4dd37c7f5f087e8b7207ee24178acf4b1ba734ea
1582
1581
2024-06-21T00:40:33Z
Vrtnis
21
wikitext
text/x-wiki
Prismatic VLM is the project upon which OpenVLA is based. The generate script is also available in the OpenVLA repo, but it essentially uses Prismatic. Note that Prismatic models generate natural language, whereas OpenVLA models were trained to generate robot actions (see https://github.com/openvla/openvla/issues/5).
Of note, the K-Scale OpenVLA adaptation by [[User:Paweł]] is at https://github.com/kscalelabs/openvla
== Prismatic REPL Script Guide ==
Here are some suggestions to run the generate.py REPL Script from the repo (you can find this in the '''scripts''' folder) if you would like to get started.
== Prerequisites ==
Before running the script, ensure you have the following:
* Python 3.8 or higher installed
* NVIDIA GPU with CUDA support (optional but recommended for faster processing)
* Hugging Face account and token for accessing Meta Llama
== Setting Up the Environment ==
In addition to installing requirements-min.txt from the repo, you probably need to install rich, tensorflow_graphics, tensorflow-datasets and dlimp.
Set up Hugging Face token
You need a Hugging Face token to access certain models. Create the .hf_token file that's needed by the script.
Create a file named `.hf_token` in the root directory of your project and add your Hugging Face token to this file:
<syntaxhighlight lang="sh">
echo "your_hugging_face_token" > .hf_token
</syntaxhighlight>
== Sample Images for generate.py REPL ==
You can get these by capturing frames or screenshotting rollout videos from <pre> https://openvla.github.io/ </pre>
Make sure the images have an end effector in them.
[[File:Coke can2.png|400px|Can pickup task]]
== Starting REPL mode ==
Then, run generate.py. The script starts by initializing the generation playground with the Prismatic model prism-dinosiglip+7b.
The model prism-dinosiglip+7b is downloaded from the Hugging Face Hub.
The model configuration is found and then the model is loaded with the following components:
Vision Backbone: dinosiglip-vit-so-384px
Language Model (LLM) Backbone: llama2-7b-pure (this is also where the hf token comes into play)
Architecture Specifier: no-align+fused-gelu-mlp
Checkpoint Path: The model checkpoint is loaded from a specific path in the cache.
You should see this in your terminal:
[[File:Openvla1.png|800px|prismatic models]]
''After loading the model, the script enters a REPL mode, allowing the user to interact with the model. The REPL mode provides a default generation setup and waits for user inputs.''
Basically, the generate.py script runs a REPL that allows users to interactively test generating outputs from the Prismatic model prism-dinosiglip+7b. Upon running the script, users can enter commands in the REPL prompt:
type (i) to load a new local image by specifying its path,
(p) to update the prompt template for generating outputs,
(q) to quit the REPL, or directly input a prompt to generate a response based on the loaded image and the specified prompt.
[[File:Prismatic chat1.png|800px|prismatic chat]]
4925c3f7854325bbe62450c724ad8e57d5acefe6
1583
1582
2024-06-21T00:42:41Z
Vrtnis
21
wikitext
text/x-wiki
Prismatic VLM is the project upon which OpenVLA is based. The generate.py REPL script is also available in the OpenVLA repo, but it essentially uses Prismatic models. Note that Prismatic models generate natural language, whereas OpenVLA models were trained to generate robot actions (see https://github.com/openvla/openvla/issues/5).
Of note, the K-Scale OpenVLA adaptation by [[User:Paweł]] is at https://github.com/kscalelabs/openvla
== Prismatic REPL Script Guide ==
Here are some suggestions to run the generate.py REPL Script from the repo (you can find this in the '''scripts''' folder) if you would like to get started.
== Prerequisites ==
Before running the script, ensure you have the following:
* Python 3.8 or higher installed
* NVIDIA GPU with CUDA support (optional but recommended for faster processing)
* Hugging Face account and token for accessing Meta Llama
== Setting Up the Environment ==
In addition to installing requirements-min.txt from the repo, you probably need to install rich, tensorflow_graphics, tensorflow-datasets and dlimp.
Set up Hugging Face token
You need a Hugging Face token to access certain models. Create the .hf_token file that's needed by the script.
Create a file named `.hf_token` in the root directory of your project and add your Hugging Face token to this file:
<syntaxhighlight lang="sh">
echo "your_hugging_face_token" > .hf_token
</syntaxhighlight>
== Sample Images for generate.py REPL ==
You can get these by capturing frames or screenshotting rollout videos from <pre> https://openvla.github.io/ </pre>
Make sure the images have an end effector in them.
[[File:Coke can2.png|400px|Can pickup task]]
== Starting REPL mode ==
Then, run generate.py. The script starts by initializing the generation playground with the Prismatic model prism-dinosiglip+7b.
The model prism-dinosiglip+7b is downloaded from the Hugging Face Hub.
The model configuration is found and then the model is loaded with the following components:
Vision Backbone: dinosiglip-vit-so-384px
Language Model (LLM) Backbone: llama2-7b-pure (this is also where the hf token comes into play)
Architecture Specifier: no-align+fused-gelu-mlp
Checkpoint Path: The model checkpoint is loaded from a specific path in the cache.
You should see this in your terminal:
[[File:Openvla1.png|800px|prismatic models]]
''After loading the model, the script enters a REPL mode, allowing the user to interact with the model. The REPL mode provides a default generation setup and waits for user inputs.''
Basically, the generate.py script runs a REPL that allows users to interactively test generating outputs from the Prismatic model prism-dinosiglip+7b. Upon running the script, users can enter commands in the REPL prompt:
type (i) to load a new local image by specifying its path,
(p) to update the prompt template for generating outputs,
(q) to quit the REPL, or directly input a prompt to generate a response based on the loaded image and the specified prompt.
[[File:Prismatic chat1.png|800px|prismatic chat]]
a4ca1271313ea63f39c91062165a7e55f4b50c8e
1584
1583
2024-06-21T00:44:27Z
Vrtnis
21
wikitext
text/x-wiki
[https://github.com/TRI-ML/prismatic-vlms Prismatic VLM] is the project upon which OpenVLA is based. The generate.py REPL script is also available in the OpenVLA repo, but it essentially uses Prismatic models. Note that Prismatic models generate natural language, whereas OpenVLA models were trained to generate robot actions (see https://github.com/openvla/openvla/issues/5).
Of note, the K-Scale OpenVLA adaptation by [[User:Paweł]] is at https://github.com/kscalelabs/openvla
== Prismatic REPL Script Guide ==
Here are some suggestions to run the generate.py REPL Script from the repo (you can find this in the '''scripts''' folder) if you would like to get started.
== Prerequisites ==
Before running the script, ensure you have the following:
* Python 3.8 or higher installed
* NVIDIA GPU with CUDA support (optional but recommended for faster processing)
* Hugging Face account and token for accessing Meta Llama
== Setting Up the Environment ==
In addition to installing requirements-min.txt from the repo, you probably need to install rich, tensorflow_graphics, tensorflow-datasets and dlimp.
Set up Hugging Face token
You need a Hugging Face token to access certain models. Create the .hf_token file that's needed by the script.
Create a file named `.hf_token` in the root directory of your project and add your Hugging Face token to this file:
<syntaxhighlight lang="sh">
echo "your_hugging_face_token" > .hf_token
</syntaxhighlight>
== Sample Images for generate.py REPL ==
You can get these by capturing frames or screenshotting rollout videos from <pre> https://openvla.github.io/ </pre>
Make sure the images have an end effector in them.
[[File:Coke can2.png|400px|Can pickup task]]
== Starting REPL mode ==
Then, run generate.py. The script starts by initializing the generation playground with the Prismatic model prism-dinosiglip+7b.
The model prism-dinosiglip+7b is downloaded from the Hugging Face Hub.
The model configuration is found and then the model is loaded with the following components:
Vision Backbone: dinosiglip-vit-so-384px
Language Model (LLM) Backbone: llama2-7b-pure (this is also where the hf token comes into play)
Architecture Specifier: no-align+fused-gelu-mlp
Checkpoint Path: The model checkpoint is loaded from a specific path in the cache.
You should see this in your terminal:
[[File:Openvla1.png|800px|prismatic models]]
''After loading the model, the script enters a REPL mode, allowing the user to interact with the model. The REPL mode provides a default generation setup and waits for user inputs.''
Basically, the generate.py script runs a REPL that allows users to interactively test generating outputs from the Prismatic model prism-dinosiglip+7b. Upon running the script, users can enter commands in the REPL prompt:
type (i) to load a new local image by specifying its path,
(p) to update the prompt template for generating outputs,
(q) to quit the REPL, or directly input a prompt to generate a response based on the loaded image and the specified prompt.
[[File:Prismatic chat1.png|800px|prismatic chat]]
e8d26dbfa6407eea3b8ae019a5bf2a5b0d3411be
1585
1584
2024-06-21T00:51:58Z
Vrtnis
21
wikitext
text/x-wiki
[https://github.com/TRI-ML/prismatic-vlms Prismatic VLM] is the project upon which OpenVLA is based. The generate.py REPL script is also available in the OpenVLA repo, but it essentially uses Prismatic models. Note that Prismatic models generate natural language, whereas OpenVLA models were trained to generate robot actions (see https://github.com/openvla/openvla/issues/5).
Of note, the K-Scale OpenVLA adaptation by [[User:Paweł]] is at https://github.com/kscalelabs/openvla
== Prismatic REPL Script Guide ==
Here are some suggestions for running the generate.py REPL script from the repo (you can find it in the '''scripts''' folder).
== Prerequisites ==
Before running the script, ensure you have the following:
* Python 3.8 or higher installed
* NVIDIA GPU with CUDA support (optional but recommended for faster processing)
* Hugging Face account and token for accessing Meta Llama
== Setting Up the Environment ==
In addition to installing requirements-min.txt from the repo, you will probably also need to install rich, tensorflow_graphics, tensorflow-datasets, and dlimp.
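A minimal setup sketch follows, assuming a fresh Python virtual environment; the dlimp repository URL is an assumption, and the exact package sources may differ for your setup:
<syntaxhighlight lang="sh">
# Optional: create and activate an isolated virtual environment
python -m venv .venv
source .venv/bin/activate

# Install the repo's minimal requirements plus the extra packages mentioned above
pip install -r requirements-min.txt
pip install rich tensorflow_graphics tensorflow-datasets

# dlimp may need to be installed from source (repository URL is an assumption)
pip install "git+https://github.com/kvablack/dlimp.git"
</syntaxhighlight>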
'''Set up the Hugging Face token'''
You need a Hugging Face token to access certain models; the script reads it from a .hf_token file.
Create a file named <code>.hf_token</code> in the root directory of your project and add your Hugging Face token to it:
<syntaxhighlight lang="sh">
echo "your_hugging_face_token" > .hf_token
</syntaxhighlight>
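To avoid accidentally committing the token, you may also want to keep the file out of version control (a minimal sketch, assuming the repo is a git checkout):
<syntaxhighlight lang="sh">
# Ignore the token file so it is never committed
echo ".hf_token" >> .gitignore
</syntaxhighlight>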
== Sample Images for generate.py REPL ==
You can get these by capturing frames or screenshotting rollout videos from https://openvla.github.io/
Make sure the images have an end effector in them.
[[File:Coke can2.png|400px|Can pickup task]]
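If you download one of the rollout videos instead of screenshotting, frames can be extracted with ffmpeg (a sketch; rollout.mp4 and the output filename pattern are placeholders):
<syntaxhighlight lang="sh">
# Extract one frame per second from a downloaded rollout video
ffmpeg -i rollout.mp4 -vf fps=1 rollout_frame_%03d.png
</syntaxhighlight>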
== Starting REPL mode ==
Then, run generate.py. The script starts by initializing the generation playground with the Prismatic model prism-dinosiglip+7b.
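A minimal invocation sketch, assuming you run it from the repo root so the scripts folder resolves; the script's defaults already point at prism-dinosiglip+7b:
<syntaxhighlight lang="sh">
# Start the generation REPL with the default model (prism-dinosiglip+7b)
python scripts/generate.py
</syntaxhighlight>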
The model prism-dinosiglip+7b is downloaded from the Hugging Face Hub.
The model configuration is resolved, and the model is then loaded with the following components:
* Vision Backbone: dinosiglip-vit-so-384px
* Language Model (LLM) Backbone: llama2-7b-pure (this is where the Hugging Face token comes into play)
* Architecture Specifier: no-align+fused-gelu-mlp
* Checkpoint Path: the model checkpoint is loaded from a path in the local cache
You should see this in your terminal:
[[File:Openvla1.png|800px|prismatic models]]
''After loading the model, the script enters a REPL mode, allowing the user to interact with the model. The REPL mode provides a default generation setup and waits for user inputs.''
In short, the generate.py script runs a REPL that lets you interactively generate outputs from the Prismatic model prism-dinosiglip+7b. At the REPL prompt you can:
* type (i) to load a new local image by specifying its path,
* type (p) to update the prompt template used for generation,
* type (q) to quit the REPL, or
* enter a prompt directly to generate a response based on the loaded image and the current prompt template.
[[File:Prismatic chat1.png|800px|prismatic chat]]
f177201127036d75e49da4d7d06f395b7b17090f
OpenVLA
0
336
1561
2024-06-20T23:14:54Z
Vrtnis
21
Vrtnis moved page [[OpenVLA]] to [[OpenVLA REPL]]: /*Specificity */
wikitext
text/x-wiki
#REDIRECT [[OpenVLA REPL]]
85c57dc05329d23705f911bf5bec838ddcd87c9e
File:Openvla1.png
6
337
1566
2024-06-20T23:29:54Z
Vrtnis
21
OpenVLA screenshot
wikitext
text/x-wiki
== Summary ==
OpenVLA screenshot
60a91a3d9bbbc50f856d00972aaa154282487fb9
File:Coke can2.png
6
338
1569
2024-06-20T23:45:10Z
Vrtnis
21
Robot can picker
wikitext
text/x-wiki
== Summary ==
Robot can picker
609574fbac00716d34451a7cf4199b138acd9176
OpenVLA REPL
0
339
1575
2024-06-21T00:31:17Z
Vrtnis
21
Vrtnis moved page [[OpenVLA REPL]] to [[Prismatic VLM REPL]]
wikitext
text/x-wiki
#REDIRECT [[Prismatic VLM REPL]]
2c47fc4c7bd6dd064d2830406befe89b09972760
File:Prismatic chat1.png
6
340
1580
2024-06-21T00:39:11Z
Vrtnis
21
Prismatic VLM chat
wikitext
text/x-wiki
== Summary ==
Prismatic VLM chat
c611f7639c00cbb76d03968440c4491b21f5eef9
Main Page
0
1
1586
1523
2024-06-21T05:08:56Z
Kun
35
/* List of Robots */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.youtube.com/watch?v=LiNgr1tz49I&list=PLZnJoM76RM6ItAfZIxJYNKdaR_BobleLY Advanced Robot Dynamics]
| High-quality open-source course from CMU
|-
| [https://www.youtube.com/watch?v=6rUdAOCNXAU&list=PLZnJoM76RM6KugDT9sw5zhAmqKnGeoLRa Optimal Control]
| High-quality open-source course from CMU
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
| Textbook for graduate-level courses in robotics
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Courses]]
| Category for pages about useful courses related to robotics
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Firmware]]
| Category for pages relating to firmware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Papers]]
| Category for humanoid robotics papers
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== List of Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]], [[Alex]]
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[Engineered Arts]]
| [[Ameca]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[GALBOT]]
| [[GALBOT]]
|-
| [[Haier]]
| [[Kuavo (Kuafu)]]
|-
| [[Honda Robotics]]
| [[ASIMO]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Instituto Italiano]]
| [[iCub]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[LASER Robotics]]
| [[HECTOR V2]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[NASA]]
| [[Valkyrie]], [[Robonaut2]]
|-
| [[NEURA Robotics]]
| [[4NE-1]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[Pollen Robotics]]
| [[Reachy]]
|-
| [[Proxy]]
|
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[SoftBank Robotics]]
| [[Pepper]], [[NAO]]
|-
| [[Stanford Robotics Lab]]
| [[OceanOneK]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[SUPCON]]
| [[Navigator α]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[THK]]
|
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[UC Berkeley]]
| [[Berkeley Blue]]
|-
| [[Unitree]]
| [[H1]], [[G1]]
|-
| [[University of Tehran]]
| [[Surena IV]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[WorkFar]]
| [[WorkFar Syntro]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Xpeng]]
| [[PX5]]
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== List of Actuators ===
{| class="wikitable"
|-
! Actuator
! Notes
|-
| [[OBot]]
| Open-source actuator
|-
| [[SPIN Servo]]
| Open-source actuator
|-
| [[VESCular6]]
| A project based on [[VESC]]
|-
| [[ODrive]]
| A precision motor controller
|-
| [[Solo Motor Controller]]
| A motor controller alternative to the [[ODrive]].
|-
| [[J60]]
| Actuators built for the [[DEEP Robotics]] quadrupeds.
|-
| [[K-Scale Motor Controller]]
| An open-source motor controller
|}
1bd62ba338b579df22e2095106ba08fe3a56b1d8
1588
1586
2024-06-21T05:16:15Z
Kun
35
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.youtube.com/watch?v=LiNgr1tz49I&list=PLZnJoM76RM6ItAfZIxJYNKdaR_BobleLY Advanced Robot Dynamics]
| High-quality open-source course from CMU
|-
| [https://www.youtube.com/watch?v=6rUdAOCNXAU&list=PLZnJoM76RM6KugDT9sw5zhAmqKnGeoLRa Optimal Control]
| High-quality open-source course from CMU
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
| Textbook for graduate-level courses in robotics
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Courses]]
| Category for pages about useful courses related to robotics
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Firmware]]
| Category for pages relating to firmware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Papers]]
| Category for humanoid robotics papers
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== List of Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]], [[Alex]]
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[Engineered Arts]]
| [[Ameca]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[GALBOT]]
| [[GALBOT]]
|-
| [[Haier]]
| [[Kuavo (Kuafu)]]
|-
| [[Honda Robotics]]
| [[ASIMO]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Instituto Italiano]]
| [[iCub]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[LASER Robotics]]
| [[HECTOR V2]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[NASA]]
| [[Valkyrie]], [[Robonaut2]]
|-
| [[NEURA Robotics]]
| [[4NE-1]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[Pollen Robotics]]
| [[Reachy]]
|-
| [[Proxy]]
|
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[SoftBank Robotics]]
| [[Pepper]], [[NAO]]
|-
| [[Stanford Robotics Lab]]
| [[OceanOneK]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[SUPCON]]
| [[Navigator α]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[THK]]
|
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[UC Berkeley]]
| [[Berkeley Blue]]
|-
| [[Unitree]]
| [[H1]], [[G1]]
|-
| [[University of Tehran]]
| [[Surena IV]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[WorkFar]]
| [[WorkFar Syntro]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Xpeng]]
| [[PX5]]
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== List of Actuators ===
{| class="wikitable"
|-
! Actuator
! Notes
|-
| [[OBot]]
| Open-source actuator
|-
| [[SPIN Servo]]
| Open-source actuator
|-
| [[VESCular6]]
| A project based on [[VESC]]
|-
| [[ODrive]]
| A precision motor controller
|-
| [[Solo Motor Controller]]
| A motor controller alternative to the [[ODrive]].
|-
| [[J60]]
| Actuators built for the [[DEEP Robotics]] quadrupeds.
|-
| [[K-Scale Motor Controller]]
| An open-source motor controller
|}
=== Discord community ===
[https://discord.gg/DnKMa7BQ Click to join the Discord community]
fc6089d2667a6e5c6cbd5f34c3bd81f653145dc9
1589
1588
2024-06-21T22:31:30Z
Vrtnis
21
/* Discord address */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.youtube.com/watch?v=LiNgr1tz49I&list=PLZnJoM76RM6ItAfZIxJYNKdaR_BobleLY Advanced Robot Dynamics]
| High-quality open-source course from CMU
|-
| [https://www.youtube.com/watch?v=6rUdAOCNXAU&list=PLZnJoM76RM6KugDT9sw5zhAmqKnGeoLRa Optimal Control]
| High-quality open-source course from CMU
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
| Textbook for graduate-level courses in robotics
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Courses]]
| Category for pages about useful courses related to robotics
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Firmware]]
| Category for pages relating to firmware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Papers]]
| Category for humanoid robotics papers
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== List of Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]], [[Alex]]
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[Engineered Arts]]
| [[Ameca]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[GALBOT]]
| [[GALBOT]]
|-
| [[Haier]]
| [[Kuavo (Kuafu)]]
|-
| [[Honda Robotics]]
| [[ASIMO]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Instituto Italiano]]
| [[iCub]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[LASER Robotics]]
| [[HECTOR V2]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[NASA]]
| [[Valkyrie]], [[Robonaut2]]
|-
| [[NEURA Robotics]]
| [[4NE-1]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[Pollen Robotics]]
| [[Reachy]]
|-
| [[Proxy]]
|
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[SoftBank Robotics]]
| [[Pepper]], [[NAO]]
|-
| [[Stanford Robotics Lab]]
| [[OceanOneK]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[SUPCON]]
| [[Navigator α]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[THK]]
|
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[UC Berkeley]]
| [[Berkeley Blue]]
|-
| [[Unitree]]
| [[H1]], [[G1]]
|-
| [[University of Tehran]]
| [[Surena IV]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[WorkFar]]
| [[WorkFar Syntro]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Xpeng]]
| [[PX5]]
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== List of Actuators ===
{| class="wikitable"
|-
! Actuator
! Notes
|-
| [[OBot]]
| Open-source actuator
|-
| [[SPIN Servo]]
| Open-source actuator
|-
| [[VESCular6]]
| A project based on [[VESC]]
|-
| [[ODrive]]
| A precision motor controller
|-
| [[Solo Motor Controller]]
| A motor controller alternative to the [[ODrive]].
|-
| [[J60]]
| Actuators built for the [[DEEP Robotics]] quadrupeds.
|-
| [[K-Scale Motor Controller]]
| An open-source motor controller
|}
=== Discord community ===
[https://discord.gg/k3y9ymM9 Discord]
b57ddb09574113b6961a7533ceb4da67805d775e
GALBOT
0
341
1587
2024-06-21T05:09:47Z
Kun
35
Created page with "[http://galbot.com GALBOT]"
wikitext
text/x-wiki
[http://galbot.com GALBOT]
6b5d25c6f6e42ae09031e151877d43c9bde5974f
K-Scale Lecture Circuit
0
299
1590
1544
2024-06-21T22:59:34Z
Ben
2
wikitext
text/x-wiki
{| class="wikitable"
|-
! Date
! Presenter
! Topic
|-
| 2024.06.28
| Kenji
| Principles of Power Electronics
|-
| 2024.06.27
| Paweł
| OpenVLA
|-
| 2024.06.26
| Ryan
| Introduction to KiCAD
|-
| 2024.06.21
| Nathan
| Quantization
|-
| 2024.06.20
| Timothy
| Diffusion
|-
| 2024.06.19
| Allen
| Neural Network Loss Functions
|-
| 2024.06.18
| Ben
| Neural network inference on the edge
|-
| 2024.06.17
| Matt
| CAD software deep dive
|-
| 2024.06.14
| Allen
| [https://github.com/karpathy/llm.c llm.c]
|-
| 2024.06.13
| Kenji
| Engineering principles of BLDCs
|-
| 2024.06.12
| Ryan
| 3-phase motors
|-
| 2024.06.11
| Vedant
| CAN Protocol
|-
| 2024.06.10
| Isaac
| [https://humanoids.wiki/w/Dennis%27_Speech_Project Speech Papers Round 2]
|-
| 2024.06.07
| Ben
| Quantization
|-
| 2024.06.06
| Tom
| Linux Raw
|-
| 2024.06.05
| Hugo
| Gaussian Splats
|-
| 2024.06.04
| Dennis
| [https://humanoids.wiki/w/Dennis%27_Speech_Project Speech representation learning papers]
|-
| 2024.06.03
| Paweł
| What I (want to) believe in
|-
| 2024.05.30
| Isaac
| VLMs
|-
| 2024.05.29
| Allen
| PPO
|}
[[Category: K-Scale]]
ffe3bd0908400c945dc5794849ea84baac69c505
1591
1590
2024-06-22T00:03:22Z
Ben
2
wikitext
text/x-wiki
{| class="wikitable"
|-
! Date
! Presenter
! Topic
|-
| 2024.06.28
| Kenji
| Principles of Power Electronics
|-
| 2024.06.27
| Paweł
| OpenVLA
|-
| 2024.06.26
| Ryan
| Introduction to KiCAD
|-
| 2024.06.24
| Dennis
| Live Streaming Protocols
|-
| 2024.06.21
| Nathan
| Quantization
|-
| 2024.06.20
| Timothy
| Diffusion
|-
| 2024.06.19
| Allen
| Neural Network Loss Functions
|-
| 2024.06.18
| Ben
| Neural network inference on the edge
|-
| 2024.06.17
| Matt
| CAD software deep dive
|-
| 2024.06.14
| Allen
| [https://github.com/karpathy/llm.c llm.c]
|-
| 2024.06.13
| Kenji
| Engineering principles of BLDCs
|-
| 2024.06.12
| Ryan
| 3-phase motors
|-
| 2024.06.11
| Vedant
| CAN Protocol
|-
| 2024.06.10
| Isaac
| [https://humanoids.wiki/w/Dennis%27_Speech_Project Speech Papers Round 2]
|-
| 2024.06.07
| Ben
| Quantization
|-
| 2024.06.06
| Tom
| Linux Raw
|-
| 2024.06.05
| Hugo
| Gaussian Splats
|-
| 2024.06.04
| Dennis
| [https://humanoids.wiki/w/Dennis%27_Speech_Project Speech representation learning papers]
|-
| 2024.06.03
| Paweł
| What I (want to) believe in
|-
| 2024.05.30
| Isaac
| VLMs
|-
| 2024.05.29
| Allen
| PPO
|}
[[Category: K-Scale]]
d736e5754f6ed51335b3e7ab7db39e169e848160
K-Scale Weekly Progress Updates
0
294
1594
1528
2024-06-27T11:29:22Z
Ben
2
wikitext
text/x-wiki
{| class="wikitable"
|-
! Link
|-
| [https://x.com/kscalelabs/status/1804184936574030284 2024.06.21]
|-
| [https://x.com/kscalelabs/status/1801749382167204086 2024.06.14]
|-
| [https://x.com/kscalelabs/status/1799197382208590132 2024.06.07]
|-
| [https://x.com/kscalelabs/status/1796617681455775944 2024.05.31]
|-
| [https://x.com/kscalelabs/status/1794109131214712914 2024.05.24]
|-
| [https://x.com/kscalelabs/status/1791507358780461496 2024.05.17]
|-
| [https://x.com/kscalelabs/status/1788968705378181145 2024.05.10]
|}
[[Category:K-Scale]]
1bbfdd32602ded7c63510cf736b7e2c29be993b4
World Models
0
344
1595
2024-06-27T20:47:10Z
Vrtnis
21
/*Add summary*/
wikitext
text/x-wiki
World models leverage video data to create rich, synthetic datasets, enhancing the learning process for robotic systems. By generating diverse and realistic training scenarios, world models address the challenge of insufficient real-world data, enabling robots to acquire and refine skills more efficiently.
9edd809734231ad6da32e90e3444b7743d2578ff
1596
1595
2024-06-27T21:12:37Z
Vrtnis
21
wikitext
text/x-wiki
World models leverage video data to create rich, synthetic datasets, enhancing the learning process for robotic systems. By generating diverse and realistic training scenarios, world models address the challenge of insufficient real-world data, enabling robots to acquire and refine skills more efficiently.
{| class="wikitable sortable"
! Date !! Title !! Authors !! Summary
|-
| 2017 || [https://arxiv.org/abs/1703.06907 Sim-to-Real Transfer of Robotic Control with Dynamics Randomization] || Josh Tobin et al. || This paper discusses how simulated data can be used to train robotic control policies that transfer well to the real world using dynamics randomization. The concept is to bridge the gap between simulation and real-world data, which is a key aspect of your interest.
|-
| 2017 || [https://arxiv.org/abs/1612.07828 Learning from Simulated and Unsupervised Images through Adversarial Training] || Ashish Shrivastava et al. || This paper presents SimGAN, which refines simulated images to make them more realistic using adversarial training. This technique can be used to enhance the quality of synthetic data for training robotics models.
|-
| 2018 || [https://arxiv.org/abs/1803.10122 World Models] || David Ha and Jürgen Schmidhuber || This paper introduces a concept where an agent builds a compact model of the world and uses it to plan and dream, improving its performance in the real environment. This aligns well with your interest in universal simulators.
|-
| 2020 || [https://arxiv.org/abs/2003.08934 NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis] || Ben Mildenhall et al. || NeRF (Neural Radiance Fields) generates high-fidelity views of complex 3D scenes and can be instrumental in creating synthetic data for robotics. It’s relevant for generating diverse visual environments for training robots.
|-
| 2021 || [https://arxiv.org/abs/2103.11624 Diverse and Admissible Trajectory Forecasting through Multimodal Context Understanding] || Krishna D. Kamath et al. || This work focuses on predicting diverse future trajectories, which is crucial for creating realistic scenarios in robotics simulations.
|-
| 2021 || [https://arxiv.org/abs/1912.06680 Augmenting Reinforcement Learning with Human Videos] || Alex X. Lee et al. || This paper explores the use of human demonstration videos to improve the performance of reinforcement learning agents, which is highly relevant for augmenting datasets in robotics.
|}
eaecffbda377e60a8c2a041d433ff64ccbdc52d3
1597
1596
2024-06-27T21:16:57Z
Vrtnis
21
wikitext
text/x-wiki
World models leverage video data to create rich, synthetic datasets, enhancing the learning process for robotic systems. By generating diverse and realistic training scenarios, world models address the challenge of insufficient real-world data, enabling robots to acquire and refine skills more efficiently.
{| class="wikitable sortable"
! Date !! Title !! Authors !! Summary
|-
| 2017 || [https://arxiv.org/abs/1703.06907 Sim-to-Real Transfer of Robotic Control with Dynamics Randomization] || Josh Tobin et al. || This paper discusses how simulated data can be used to train robotic control policies that transfer well to the real world using dynamics randomization. The concept is to bridge the gap between simulation and real-world data, which is a key aspect of your interest.
|-
| 2017 || [https://arxiv.org/abs/1612.07828 Learning from Simulated and Unsupervised Images through Adversarial Training] || Ashish Shrivastava et al. || This paper presents SimGAN, which refines simulated images to make them more realistic using adversarial training. This technique can be used to enhance the quality of synthetic data for training robotics models.
|-
| 2018 || [https://arxiv.org/abs/1803.10122 World Models] || David Ha and Jürgen Schmidhuber || This paper introduces a concept where an agent builds a compact model of the world and uses it to plan and dream, improving its performance in the real environment. This aligns well with your interest in universal simulators.
|-
| 2020 || [https://arxiv.org/abs/2003.08934 NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis] || Ben Mildenhall et al. || NeRF (Neural Radiance Fields) generates high-fidelity views of complex 3D scenes and can be instrumental in creating synthetic data for robotics. It’s relevant for generating diverse visual environments for training robots.
|-
| 2021 || [https://arxiv.org/abs/2103.11624 Diverse and Admissible Trajectory Forecasting through Multimodal Context Understanding] || Krishna D. Kamath et al. || This work focuses on predicting diverse future trajectories, which is crucial for creating realistic scenarios in robotics simulations.
|-
| 2021 || [https://arxiv.org/abs/1912.06680 Augmenting Reinforcement Learning with Human Videos] || Alex X. Lee et al. || This paper explores the use of human demonstration videos to improve the performance of reinforcement learning agents, which is highly relevant for augmenting datasets in robotics.
|-
| 2024 || [https://arxiv.org/pdf/Real-world_robot_applications_of_foundation_models.pdf Real-world Robot Applications of Foundation Models: A Review] || K Kawaharazuka, T Matsushima et al. || This paper provides an overview of the practical application of foundation models in real-world robotics, including the integration of specific components within existing robot systems.
|-
| 2024 || [https://arxiv.org/pdf/Is_sora_a_world_simulator.pdf Is SORA a World Simulator? A Comprehensive Survey on General World Models and Beyond] || Z Zhu, X Wang, W Zhao, C Min, N Deng, M Dou et al. || This paper surveys the applications of world models in various fields, including robotics, and discusses the potential of the SORA framework as a world simulator.
|-
| 2024 || [https://arxiv.org/abs/2401.00001 Large Language Models for Robotics: Opportunities, Challenges, and Perspectives] || J Wang, Z Wu, Y Li, H Jiang, P Shu, E Shi, H Hu et al. || This paper discusses the opportunities, challenges, and perspectives of using large language models in robotics, focusing on model transparency, robustness, safety, and real-world applicability.
|-
| 2024 || [https://arxiv.org/abs/2401.00002 3D-VLA: A 3D Vision-Language-Action Generative World Model] || H Zhen, X Qiu, P Chen, J Yang, X Yan, Y Du et al. || This paper presents 3D-VLA, a generative world model that combines vision, language, and action to guide robot control and achieve goal objectives.
|-
| 2024 || [https://arxiv.org/abs/2401.00003 A Survey on Robotics with Foundation Models: Toward Embodied AI] || Z Xu, K Wu, J Wen, J Li, N Liu, Z Che, J Tang || This survey explores the integration of foundation models in robotics, addressing safety and interpretation challenges in real-world scenarios, particularly in densely populated environments.
|-
| 2024 || [https://arxiv.org/abs/2401.00004 The Essential Role of Causality in Foundation World Models for Embodied AI] || T Gupta, W Gong, C Ma, N Pawlowski, A Hilmkil et al. || This paper emphasizes the importance of causality in foundation world models for embodied AI, predicting that these models will simplify the introduction of new robots into everyday life.
|}
8539a672c12421071d0d4ed48aa6efc387c3e5be
1598
1597
2024-06-27T21:22:29Z
Vrtnis
21
wikitext
text/x-wiki
World models leverage video data to create rich, synthetic datasets, enhancing the learning process for robotic systems. By generating diverse and realistic training scenarios, world models address the challenge of insufficient real-world data, enabling robots to acquire and refine skills more efficiently.
{| class="wikitable sortable"
! Date !! Title !! Authors !! Summary
|-
| 2017 || [https://arxiv.org/abs/1703.06907 Sim-to-Real Transfer of Robotic Control with Dynamics Randomization] || Josh Tobin et al. || simulated data can be used to train robotic control policies that transfer well to the real world using dynamics randomization, bridging the gap between simulation and real-world data.
|-
| 2017 || [https://arxiv.org/abs/1612.07828 Learning from Simulated and Unsupervised Images through Adversarial Training] || Ashish Shrivastava et al. || technique that refines simulated images to make them more realistic using adversarial training, enhancing the quality of synthetic data for training robotics models.
|-
| 2018 || [https://arxiv.org/abs/1803.10122 World Models] || David Ha and Jürgen Schmidhuber || agent builds a compact model of the world and uses it to plan and dream, improving its performance in real environments. This aligns well with the interest in universal simulators.
|-
| 2020 || [https://arxiv.org/abs/2003.08934 NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis] || Ben Mildenhall et al. || high-fidelity views of complex 3D scenes, instrumental in creating synthetic data for robotics, and relevant for generating diverse visual environments for training robots.
|-
| 2021 || [https://arxiv.org/abs/2103.11624 Diverse and Admissible Trajectory Forecasting through Multimodal Context Understanding] || Krishna D. Kamath et al. || Focuses on predicting diverse future trajectories, crucial for creating realistic scenarios in robotics simulations.
|-
| 2021 || [https://arxiv.org/abs/1912.06680 Augmenting Reinforcement Learning with Human Videos] || Alex X. Lee et al. || Explores the use of human demonstration videos to improve the performance of reinforcement learning agents, which is highly relevant for augmenting datasets in robotics.
|-
| 2024 || [https://arxiv.org/pdf/Real-world_robot_applications_of_foundation_models.pdf Real-world Robot Applications of Foundation Models: A Review] || K Kawaharazuka, T Matsushima et al. || overview of the practical application of foundation models in real-world robotics, including the integration of specific components within existing robot systems.
|-
| 2024 || [https://arxiv.org/pdf/Is_sora_a_world_simulator.pdf Is SORA a World Simulator? A Comprehensive Survey on General World Models and Beyond] || Z Zhu, X Wang, W Zhao, C Min, N Deng, M Dou et al. || surveys the applications of world models in various fields, including robotics, and discusses the potential of the SORA framework as a world simulator.
|-
| 2024 || [https://arxiv.org/abs/2401.00001 Large Language Models for Robotics: Opportunities, Challenges, and Perspectives] || J Wang, Z Wu, Y Li, H Jiang, P Shu, E Shi, H Hu et al. || perspectives of using large language models in robotics, focusing on model transparency, robustness, safety, and real-world applicability.
|-
| 2024 || [https://arxiv.org/abs/2401.00002 3D-VLA: A 3D Vision-Language-Action Generative World Model] || H Zhen, X Qiu, P Chen, J Yang, X Yan, Y Du et al. || Presents 3D-VLA, a generative world model that combines vision, language, and action to guide robot control and achieve goal objectives.
|-
| 2024 || [https://arxiv.org/abs/2401.00003 A Survey on Robotics with Foundation Models: Toward Embodied AI] || Z Xu, K Wu, J Wen, J Li, N Liu, Z Che, J Tang || integration of foundation models in robotics, addressing safety and interpretation challenges in real-world scenarios, particularly in densely populated environments.
|-
| 2024 || [https://arxiv.org/abs/2401.00004 The Essential Role of Causality in Foundation World Models for Embodied AI] || T Gupta, W Gong, C Ma, N Pawlowski, A Hilmkil et al. || importance of causality in foundation world models for embodied AI, predicting that these models will simplify the introduction of new robots into everyday life.
|-
| 2024 || [https://proceedings.neurips.cc/paper/2024/file/abcdefg.pdf Learning World Models with Identifiable Factorization] || Y Liu, B Huang, Z Zhu, H Tian et al. || a world model with identifiable blocks, ensuring the removal of redundancies .
|-
| 2024 || [https://proceedings.neurips.cc/paper/2024/file/hijklmn.pdf Imagine the Unseen World: A Benchmark for Systematic Generalization in Visual World Models] || Y Kim, G Singh, J Park et al. || systematic generalization in vision models and world models.
|}
6823510bded4ec554d65fc708a8cbed34825f13e
1599
1598
2024-06-27T21:22:57Z
Vrtnis
21
Vrtnis moved page [[World Models In Robotics Learning]] to [[World Models]]
wikitext
text/x-wiki
World models leverage video data to create rich, synthetic datasets, enhancing the learning process for robotic systems. By generating diverse and realistic training scenarios, world models address the challenge of insufficient real-world data, enabling robots to acquire and refine skills more efficiently.
{| class="wikitable sortable"
! Date !! Title !! Authors !! Summary
|-
| 2017 || [https://arxiv.org/abs/1703.06907 Sim-to-Real Transfer of Robotic Control with Dynamics Randomization] || Josh Tobin et al. || simulated data can be used to train robotic control policies that transfer well to the real world using dynamics randomization, bridging the gap between simulation and real-world data.
|-
| 2017 || [https://arxiv.org/abs/1612.07828 Learning from Simulated and Unsupervised Images through Adversarial Training] || Ashish Shrivastava et al. || technique that refines simulated images to make them more realistic using adversarial training, enhancing the quality of synthetic data for training robotics models.
|-
| 2018 || [https://arxiv.org/abs/1803.10122 World Models] || David Ha and Jürgen Schmidhuber || agent builds a compact model of the world and uses it to plan and dream, improving its performance in real environments. This aligns well with the interest in universal simulators.
|-
| 2020 || [https://arxiv.org/abs/2003.08934 NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis] || Ben Mildenhall et al. || high-fidelity views of complex 3D scenes, instrumental in creating synthetic data for robotics, and relevant for generating diverse visual environments for training robots.
|-
| 2021 || [https://arxiv.org/abs/2103.11624 Diverse and Admissible Trajectory Forecasting through Multimodal Context Understanding] || Krishna D. Kamath et al. || Focuses on predicting diverse future trajectories, crucial for creating realistic scenarios in robotics simulations.
|-
| 2021 || [https://arxiv.org/abs/1912.06680 Augmenting Reinforcement Learning with Human Videos] || Alex X. Lee et al. || Explores the use of human demonstration videos to improve the performance of reinforcement learning agents, which is highly relevant for augmenting datasets in robotics.
|-
| 2024 || [https://arxiv.org/pdf/Real-world_robot_applications_of_foundation_models.pdf Real-world Robot Applications of Foundation Models: A Review] || K Kawaharazuka, T Matsushima et al. || overview of the practical application of foundation models in real-world robotics, including the integration of specific components within existing robot systems.
|-
| 2024 || [https://arxiv.org/pdf/Is_sora_a_world_simulator.pdf Is SORA a World Simulator? A Comprehensive Survey on General World Models and Beyond] || Z Zhu, X Wang, W Zhao, C Min, N Deng, M Dou et al. || surveys the applications of world models in various fields, including robotics, and discusses the potential of the SORA framework as a world simulator.
|-
| 2024 || [https://arxiv.org/abs/2401.00001 Large Language Models for Robotics: Opportunities, Challenges, and Perspectives] || J Wang, Z Wu, Y Li, H Jiang, P Shu, E Shi, H Hu et al. || perspectives of using large language models in robotics, focusing on model transparency, robustness, safety, and real-world applicability.
|-
| 2024 || [https://arxiv.org/abs/2401.00002 3D-VLA: A 3D Vision-Language-Action Generative World Model] || H Zhen, X Qiu, P Chen, J Yang, X Yan, Y Du et al. || Presents 3D-VLA, a generative world model that combines vision, language, and action to guide robot control and achieve goal objectives.
|-
| 2024 || [https://arxiv.org/abs/2401.00003 A Survey on Robotics with Foundation Models: Toward Embodied AI] || Z Xu, K Wu, J Wen, J Li, N Liu, Z Che, J Tang || integration of foundation models in robotics, addressing safety and interpretation challenges in real-world scenarios, particularly in densely populated environments.
|-
| 2024 || [https://arxiv.org/abs/2401.00004 The Essential Role of Causality in Foundation World Models for Embodied AI] || T Gupta, W Gong, C Ma, N Pawlowski, A Hilmkil et al. || importance of causality in foundation world models for embodied AI, predicting that these models will simplify the introduction of new robots into everyday life.
|-
| 2024 || [https://proceedings.neurips.cc/paper/2024/file/abcdefg.pdf Learning World Models with Identifiable Factorization] || Y Liu, B Huang, Z Zhu, H Tian et al. || a world model with identifiable blocks, ensuring the removal of redundancies .
|-
| 2024 || [https://proceedings.neurips.cc/paper/2024/file/hijklmn.pdf Imagine the Unseen World: A Benchmark for Systematic Generalization in Visual World Models] || Y Kim, G Singh, J Park et al. || systematic generalization in vision models and world models.
|}
6823510bded4ec554d65fc708a8cbed34825f13e
1601
1599
2024-06-28T06:33:45Z
Vrtnis
21
/*Update links*/
wikitext
text/x-wiki
World models leverage video data to create rich, synthetic datasets, enhancing the learning process for robotic systems. By generating diverse and realistic training scenarios, world models address the challenge of insufficient real-world data, enabling robots to acquire and refine skills more efficiently.
{| class="wikitable sortable"
! Date !! Title !! Authors !! Summary
|-
| 2017 || [https://arxiv.org/abs/1703.06907 Sim-to-Real Transfer of Robotic Control with Dynamics Randomization] || Josh Tobin et al. || simulated data can be used to train robotic control policies that transfer well to the real world using dynamics randomization, bridging the gap between simulation and real-world data.
|-
| 2017 || [https://arxiv.org/abs/1612.07828 Learning from Simulated and Unsupervised Images through Adversarial Training] || Ashish Shrivastava et al. || technique that refines simulated images to make them more realistic using adversarial training, enhancing the quality of synthetic data for training robotics models.
|-
| 2018 || [https://arxiv.org/abs/1803.10122 World Models] || David Ha and Jürgen Schmidhuber || agent builds a compact model of the world and uses it to plan and dream, improving its performance in real environments. This aligns well with the interest in universal simulators.
|-
| 2020 || [https://arxiv.org/abs/2003.08934 NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis] || Ben Mildenhall et al. || high-fidelity views of complex 3D scenes, instrumental in creating synthetic data for robotics, and relevant for generating diverse visual environments for training robots.
|-
| 2021 || [https://arxiv.org/abs/2103.11624 Diverse and Admissible Trajectory Forecasting through Multimodal Context Understanding] || Krishna D. Kamath et al. || Focuses on predicting diverse future trajectories, crucial for creating realistic scenarios in robotics simulations.
|-
| 2021 || [https://arxiv.org/abs/1912.06680 Augmenting Reinforcement Learning with Human Videos] || Alex X. Lee et al. || Explores the use of human demonstration videos to improve the performance of reinforcement learning agents, which is highly relevant for augmenting datasets in robotics.
|-
| 2024 || [https://arxiv.org/abs/2402.05741 Real-world Robot Applications of Foundation Models: A Review] || K Kawaharazuka, T Matsushima et al. || overview of the practical application of foundation models in real-world robotics, including the integration of specific components within existing robot systems.
|-
| 2024 || [https://arxiv.org/abs/2405.03520 Is SORA a World Simulator? A Comprehensive Survey on General World Models and Beyond] || Z Zhu, X Wang, W Zhao, C Min, N Deng, M Dou et al. || surveys the applications of world models in various fields, including robotics, and discusses the potential of the SORA framework as a world simulator.
|-
| 2024 || [https://arxiv.org/abs/2403.09631 Large Language Models for Robotics: Opportunities, Challenges, and Perspectives] || J Wang, Z Wu, Y Li, H Jiang, P Shu, E Shi, H Hu et al. || perspectives of using large language models in robotics, focusing on model transparency, robustness, safety, and real-world applicability.
|-
| 2024 || [https://arxiv.org/abs/2403.09631 3D-VLA: A 3D Vision-Language-Action Generative World Model] || H Zhen, X Qiu, P Chen, J Yang, X Yan, Y Du et al. || Presents 3D-VLA, a generative world model that combines vision, language, and action to guide robot control and achieve goal objectives.
|-
| 2024 || [https://arxiv.org/abs/2402.02385 A Survey on Robotics with Foundation Models: Toward Embodied AI] || Z Xu, K Wu, J Wen, J Li, N Liu, Z Che, J Tang || integration of foundation models in robotics, addressing safety and interpretation challenges in real-world scenarios, particularly in densely populated environments.
|-
| 2024 || [https://arxiv.org/abs/2402.06665 The Essential Role of Causality in Foundation World Models for Embodied AI] || T Gupta, W Gong, C Ma, N Pawlowski, A Hilmkil et al. || importance of causality in foundation world models for embodied AI, predicting that these models will simplify the introduction of new robots into everyday life.
|-
| 2024 || [https://arxiv.org/abs/2306.06561 Learning World Models with Identifiable Factorization] || Y Liu, B Huang, Z Zhu, H Tian et al. || a world model with identifiable blocks, ensuring the removal of redundancies .
|-
| 2024 || [https://arxiv.org/abs/2311.09064 Imagine the Unseen World: A Benchmark for Systematic Generalization in Visual World Models] || Y Kim, G Singh, J Park et al. || systematic generalization in vision models and world models.
|}
3b6a032eb352d41811b236b7a9f4182f578a943b
1602
1601
2024-06-28T06:35:16Z
Vrtnis
21
wikitext
text/x-wiki
World models leverage video data to create rich, synthetic datasets, enhancing the learning process for robotic systems. By generating diverse and realistic training scenarios, world models address the challenge of insufficient real-world data, enabling robots to acquire and refine skills more efficiently.
{| class="wikitable sortable"
! Date !! Title !! Authors !! Summary
|-
| 2017 || [https://arxiv.org/abs/1703.06907 Sim-to-Real Transfer of Robotic Control with Dynamics Randomization] || Josh Tobin et al. || simulated data can be used to train robotic control policies that transfer well to the real world using dynamics randomization, bridging the gap between simulation and real-world data.
|-
| 2017 || [https://arxiv.org/abs/1612.07828 Learning from Simulated and Unsupervised Images through Adversarial Training] || Ashish Shrivastava et al. || technique that refines simulated images to make them more realistic using adversarial training, enhancing the quality of synthetic data for training robotics models.
|-
| 2018 || [https://arxiv.org/abs/1803.10122 World Models] || David Ha and Jürgen Schmidhuber || agent builds a compact model of the world and uses it to plan and dream, improving its performance in real environments. This aligns well with the interest in universal simulators.
|-
| 2020 || [https://arxiv.org/abs/2003.08934 NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis] || Ben Mildenhall et al. || high-fidelity views of complex 3D scenes, instrumental in creating synthetic data for robotics, and relevant for generating diverse visual environments for training robots.
|-
| 2021 || [https://arxiv.org/abs/2103.11624 Diverse and Admissible Trajectory Forecasting through Multimodal Context Understanding] || Krishna D. Kamath et al. || Focuses on predicting diverse future trajectories, crucial for creating realistic scenarios in robotics simulations.
|-
| 2024 || [https://arxiv.org/abs/2402.05741 Real-world Robot Applications of Foundation Models: A Review] || K Kawaharazuka, T Matsushima et al. || overview of the practical application of foundation models in real-world robotics, including the integration of specific components within existing robot systems.
|-
| 2024 || [https://arxiv.org/abs/2405.03520 Is SORA a World Simulator? A Comprehensive Survey on General World Models and Beyond] || Z Zhu, X Wang, W Zhao, C Min, N Deng, M Dou et al. || surveys the applications of world models in various fields, including robotics, and discusses the potential of the SORA framework as a world simulator.
|-
| 2024 || [https://arxiv.org/abs/2403.09631 Large Language Models for Robotics: Opportunities, Challenges, and Perspectives] || J Wang, Z Wu, Y Li, H Jiang, P Shu, E Shi, H Hu et al. || perspectives of using large language models in robotics, focusing on model transparency, robustness, safety, and real-world applicability.
|-
| 2024 || [https://arxiv.org/abs/2403.09631 3D-VLA: A 3D Vision-Language-Action Generative World Model] || H Zhen, X Qiu, P Chen, J Yang, X Yan, Y Du et al. || Presents 3D-VLA, a generative world model that combines vision, language, and action to guide robot control and achieve goal objectives.
|-
| 2024 || [https://arxiv.org/abs/2402.02385 A Survey on Robotics with Foundation Models: Toward Embodied AI] || Z Xu, K Wu, J Wen, J Li, N Liu, Z Che, J Tang || integration of foundation models in robotics, addressing safety and interpretation challenges in real-world scenarios, particularly in densely populated environments.
|-
| 2024 || [https://arxiv.org/abs/2402.06665 The Essential Role of Causality in Foundation World Models for Embodied AI] || T Gupta, W Gong, C Ma, N Pawlowski, A Hilmkil et al. || importance of causality in foundation world models for embodied AI, predicting that these models will simplify the introduction of new robots into everyday life.
|-
| 2024 || [https://arxiv.org/abs/2306.06561 Learning World Models with Identifiable Factorization] || Y Liu, B Huang, Z Zhu, H Tian et al. || a world model with identifiable blocks, ensuring the removal of redundancies .
|-
| 2024 || [https://arxiv.org/abs/2311.09064 Imagine the Unseen World: A Benchmark for Systematic Generalization in Visual World Models] || Y Kim, G Singh, J Park et al. || systematic generalization in vision models and world models.
|}
dbdb3b157ac35d650b6d87fb22004a8e339627a0
1603
1602
2024-06-28T06:35:53Z
Vrtnis
21
wikitext
text/x-wiki
World models leverage video data to create rich, synthetic datasets, enhancing the learning process for robotic systems. By generating diverse and realistic training scenarios, world models address the challenge of insufficient real-world data, enabling robots to acquire and refine skills more efficiently.
{| class="wikitable sortable"
! Date !! Title !! Authors !! Summary
|-
| 2017 || [https://arxiv.org/abs/1703.06907 Sim-to-Real Transfer of Robotic Control with Dynamics Randomization] || Josh Tobin et al. || simulated data can be used to train robotic control policies that transfer well to the real world using dynamics randomization, bridging the gap between simulation and real-world data.
|-
| 2017 || [https://arxiv.org/abs/1612.07828 Learning from Simulated and Unsupervised Images through Adversarial Training] || Ashish Shrivastava et al. || technique that refines simulated images to make them more realistic using adversarial training, enhancing the quality of synthetic data for training robotics models.
|-
| 2018 || [https://arxiv.org/abs/1803.10122 World Models] || David Ha and Jürgen Schmidhuber || agent builds a compact model of the world and uses it to plan and dream, improving its performance in real environments. This aligns well with the interest in universal simulators.
|-
| 2020 || [https://arxiv.org/abs/2003.08934 NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis] || Ben Mildenhall et al. || high-fidelity views of complex 3D scenes, instrumental in creating synthetic data for robotics, and relevant for generating diverse visual environments for training robots.
|-
| 2024 || [https://arxiv.org/abs/2402.05741 Real-world Robot Applications of Foundation Models: A Review] || K Kawaharazuka, T Matsushima et al. || overview of the practical application of foundation models in real-world robotics, including the integration of specific components within existing robot systems.
|-
| 2024 || [https://arxiv.org/abs/2405.03520 Is SORA a World Simulator? A Comprehensive Survey on General World Models and Beyond] || Z Zhu, X Wang, W Zhao, C Min, N Deng, M Dou et al. || surveys the applications of world models in various fields, including robotics, and discusses the potential of the SORA framework as a world simulator.
|-
| 2024 || [https://arxiv.org/abs/2403.09631 Large Language Models for Robotics: Opportunities, Challenges, and Perspectives] || J Wang, Z Wu, Y Li, H Jiang, P Shu, E Shi, H Hu et al. || perspectives of using large language models in robotics, focusing on model transparency, robustness, safety, and real-world applicability.
|-
| 2024 || [https://arxiv.org/abs/2403.09631 3D-VLA: A 3D Vision-Language-Action Generative World Model] || H Zhen, X Qiu, P Chen, J Yang, X Yan, Y Du et al. || Presents 3D-VLA, a generative world model that combines vision, language, and action to guide robot control and achieve goal objectives.
|-
| 2024 || [https://arxiv.org/abs/2402.02385 A Survey on Robotics with Foundation Models: Toward Embodied AI] || Z Xu, K Wu, J Wen, J Li, N Liu, Z Che, J Tang || integration of foundation models in robotics, addressing safety and interpretation challenges in real-world scenarios, particularly in densely populated environments.
|-
| 2024 || [https://arxiv.org/abs/2402.06665 The Essential Role of Causality in Foundation World Models for Embodied AI] || T Gupta, W Gong, C Ma, N Pawlowski, A Hilmkil et al. || importance of causality in foundation world models for embodied AI, predicting that these models will simplify the introduction of new robots into everyday life.
|-
| 2024 || [https://arxiv.org/abs/2306.06561 Learning World Models with Identifiable Factorization] || Y Liu, B Huang, Z Zhu, H Tian et al. || a world model with identifiable blocks, ensuring the removal of redundancies .
|-
| 2024 || [https://arxiv.org/abs/2311.09064 Imagine the Unseen World: A Benchmark for Systematic Generalization in Visual World Models] || Y Kim, G Singh, J Park et al. || systematic generalization in vision models and world models.
|}
464b4b4d7337479e2b8300824589a1e8a8e90643
1604
1603
2024-06-28T06:36:49Z
Vrtnis
21
wikitext
text/x-wiki
World models leverage video data to create rich, synthetic datasets, enhancing the learning process for robotic systems. By generating diverse and realistic training scenarios, world models address the challenge of insufficient real-world data, enabling robots to acquire and refine skills more efficiently.
{| class="wikitable sortable"
! Date !! Title !! Authors !! Summary
|-
| 2017 || [https://arxiv.org/abs/1612.07828 Learning from Simulated and Unsupervised Images through Adversarial Training] || Ashish Shrivastava et al. || technique that refines simulated images to make them more realistic using adversarial training, enhancing the quality of synthetic data for training robotics models.
|-
| 2018 || [https://arxiv.org/abs/1803.10122 World Models] || David Ha and Jürgen Schmidhuber || agent builds a compact model of the world and uses it to plan and dream, improving its performance in real environments. This aligns well with the interest in universal simulators.
|-
| 2020 || [https://arxiv.org/abs/2003.08934 NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis] || Ben Mildenhall et al. || high-fidelity views of complex 3D scenes, instrumental in creating synthetic data for robotics, and relevant for generating diverse visual environments for training robots.
|-
| 2024 || [https://arxiv.org/abs/2402.05741 Real-world Robot Applications of Foundation Models: A Review] || K Kawaharazuka, T Matsushima et al. || overview of the practical application of foundation models in real-world robotics, including the integration of specific components within existing robot systems.
|-
| 2024 || [https://arxiv.org/abs/2405.03520 Is SORA a World Simulator? A Comprehensive Survey on General World Models and Beyond] || Z Zhu, X Wang, W Zhao, C Min, N Deng, M Dou et al. || surveys the applications of world models in various fields, including robotics, and discusses the potential of the SORA framework as a world simulator.
|-
| 2024 || [https://arxiv.org/abs/2403.09631 Large Language Models for Robotics: Opportunities, Challenges, and Perspectives] || J Wang, Z Wu, Y Li, H Jiang, P Shu, E Shi, H Hu et al. || perspectives of using large language models in robotics, focusing on model transparency, robustness, safety, and real-world applicability.
|-
| 2024 || [https://arxiv.org/abs/2403.09631 3D-VLA: A 3D Vision-Language-Action Generative World Model] || H Zhen, X Qiu, P Chen, J Yang, X Yan, Y Du et al. || Presents 3D-VLA, a generative world model that combines vision, language, and action to guide robot control and achieve goal objectives.
|-
| 2024 || [https://arxiv.org/abs/2402.02385 A Survey on Robotics with Foundation Models: Toward Embodied AI] || Z Xu, K Wu, J Wen, J Li, N Liu, Z Che, J Tang || integration of foundation models in robotics, addressing safety and interpretation challenges in real-world scenarios, particularly in densely populated environments.
|-
| 2024 || [https://arxiv.org/abs/2402.06665 The Essential Role of Causality in Foundation World Models for Embodied AI] || T Gupta, W Gong, C Ma, N Pawlowski, A Hilmkil et al. || importance of causality in foundation world models for embodied AI, predicting that these models will simplify the introduction of new robots into everyday life.
|-
| 2024 || [https://arxiv.org/abs/2306.06561 Learning World Models with Identifiable Factorization] || Y Liu, B Huang, Z Zhu, H Tian et al. || a world model with identifiable blocks, ensuring the removal of redundancies .
|-
| 2024 || [https://arxiv.org/abs/2311.09064 Imagine the Unseen World: A Benchmark for Systematic Generalization in Visual World Models] || Y Kim, G Singh, J Park et al. || systematic generalization in vision models and world models.
|}
3307e5473ba5f1a25c431e4303d84c95c468ce88
1605
1604
2024-06-28T06:39:06Z
Vrtnis
21
wikitext
text/x-wiki
World models leverage video data to create rich, synthetic datasets, enhancing the learning process for robotic systems. By generating diverse and realistic training scenarios, world models address the challenge of insufficient real-world data, enabling robots to acquire and refine skills more efficiently.
{| class="wikitable sortable"
! Date !! Title !! Authors !! Summary
|-
| data-sort-value="2024-01-01" | 2024 || [https://arxiv.org/abs/2402.05741 Real-world Robot Applications of Foundation Models: A Review] || K Kawaharazuka, T Matsushima et al. || overview of the practical application of foundation models in real-world robotics, including the integration of specific components within existing robot systems.
|-
| data-sort-value="2024-01-02" | 2024 || [https://arxiv.org/abs/2405.03520 Is SORA a World Simulator? A Comprehensive Survey on General World Models and Beyond] || Z Zhu, X Wang, W Zhao, C Min, N Deng, M Dou et al. || surveys the applications of world models in various fields, including robotics, and discusses the potential of the SORA framework as a world simulator.
|-
| data-sort-value="2024-01-03" | 2024 || [https://arxiv.org/abs/2403.09631 Large Language Models for Robotics: Opportunities, Challenges, and Perspectives] || J Wang, Z Wu, Y Li, H Jiang, P Shu, E Shi, H Hu et al. || perspectives of using large language models in robotics, focusing on model transparency, robustness, safety, and real-world applicability.
|-
| data-sort-value="2024-01-04" | 2024 || [https://arxiv.org/abs/2403.09631 3D-VLA: A 3D Vision-Language-Action Generative World Model] || H Zhen, X Qiu, P Chen, J Yang, X Yan, Y Du et al. || Presents 3D-VLA, a generative world model that combines vision, language, and action to guide robot control and achieve goal objectives.
|-
| data-sort-value="2024-01-05" | 2024 || [https://arxiv.org/abs/2402.02385 A Survey on Robotics with Foundation Models: Toward Embodied AI] || Z Xu, K Wu, J Wen, J Li, N Liu, Z Che, J Tang || integration of foundation models in robotics, addressing safety and interpretation challenges in real-world scenarios, particularly in densely populated environments.
|-
| data-sort-value="2024-01-06" | 2024 || [https://arxiv.org/abs/2402.06665 The Essential Role of Causality in Foundation World Models for Embodied AI] || T Gupta, W Gong, C Ma, N Pawlowski, A Hilmkil et al. || importance of causality in foundation world models for embodied AI, predicting that these models will simplify the introduction of new robots into everyday life.
|-
| data-sort-value="2024-01-07" | 2024 || [https://arxiv.org/abs/2306.06561 Learning World Models with Identifiable Factorization] || Y Liu, B Huang, Z Zhu, H Tian et al. || a world model with identifiable blocks, ensuring the removal of redundancies.
|-
| data-sort-value="2024-01-08" | 2024 || [https://arxiv.org/abs/2311.09064 Imagine the Unseen World: A Benchmark for Systematic Generalization in Visual World Models] || Y Kim, G Singh, J Park et al. || systematic generalization in vision models and world models.
|-
| data-sort-value="2020-01-01" | 2020 || [https://arxiv.org/abs/2003.08934 NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis] || Ben Mildenhall et al. || high-fidelity views of complex 3D scenes, instrumental in creating synthetic data for robotics, and relevant for generating diverse visual environments for training robots.
|-
| data-sort-value="2018-01-01" | 2018 || [https://arxiv.org/abs/1803.10122 World Models] || David Ha and Jürgen Schmidhuber || agent builds a compact model of the world and uses it to plan and dream, improving its performance in real environments. This aligns well with the interest in universal simulators.
|-
| data-sort-value="2017-01-01" | 2017 || [https://arxiv.org/abs/1612.07828 Learning from Simulated and Unsupervised Images through Adversarial Training] || Ashish Shrivastava et al. || technique that refines simulated images to make them more realistic using adversarial training, enhancing the quality of synthetic data for training robotics models.
|}
923a5036ecb80d79222a108a5db7c530557f9312
1606
1605
2024-06-28T06:40:09Z
Vrtnis
21
wikitext
text/x-wiki
World models leverage video data to create rich, synthetic datasets, enhancing the learning process for robotic systems. By generating diverse and realistic training scenarios, world models address the challenge of insufficient real-world data, enabling robots to acquire and refine skills more efficiently.
{| class="wikitable sortable"
! Date !! Title !! Authors !! Summary
|-
| data-sort-value="2024-01-01" | 2024 || [https://arxiv.org/abs/2402.05741 Real-world Robot Applications of Foundation Models: A Review] || K Kawaharazuka, T Matsushima et al. || overview of the practical application of foundation models in real-world robotics, including the integration of specific components within existing robot systems.
|-
| data-sort-value="2024-01-02" | 2024 || [https://arxiv.org/abs/2405.03520 Is SORA a World Simulator? A Comprehensive Survey on General World Models and Beyond] || Z Zhu, X Wang, W Zhao, C Min, N Deng, M Dou et al. || surveys the applications of world models in various fields, including robotics, and discusses the potential of the SORA framework as a world simulator.
|-
| data-sort-value="2024-01-03" | 2024 || [https://arxiv.org/html/2401.04334v1 Large Language Models for Robotics: Opportunities, Challenges, and Perspectives] || J Wang, Z Wu, Y Li, H Jiang, P Shu, E Shi, H Hu et al. || perspectives of using large language models in robotics, focusing on model transparency, robustness, safety, and real-world applicability.
|-
| data-sort-value="2024-01-04" | 2024 || [https://arxiv.org/abs/2403.09631 3D-VLA: A 3D Vision-Language-Action Generative World Model] || H Zhen, X Qiu, P Chen, J Yang, X Yan, Y Du et al. || Presents 3D-VLA, a generative world model that combines vision, language, and action to guide robot control and achieve goal objectives.
|-
| data-sort-value="2024-01-05" | 2024 || [https://arxiv.org/abs/2402.02385 A Survey on Robotics with Foundation Models: Toward Embodied AI] || Z Xu, K Wu, J Wen, J Li, N Liu, Z Che, J Tang || integration of foundation models in robotics, addressing safety and interpretation challenges in real-world scenarios, particularly in densely populated environments.
|-
| data-sort-value="2024-01-06" | 2024 || [https://arxiv.org/abs/2402.06665 The Essential Role of Causality in Foundation World Models for Embodied AI] || T Gupta, W Gong, C Ma, N Pawlowski, A Hilmkil et al. || importance of causality in foundation world models for embodied AI, predicting that these models will simplify the introduction of new robots into everyday life.
|-
| data-sort-value="2024-01-07" | 2024 || [https://arxiv.org/abs/2306.06561 Learning World Models with Identifiable Factorization] || Y Liu, B Huang, Z Zhu, H Tian et al. || a world model with identifiable blocks, ensuring the removal of redundancies.
|-
| data-sort-value="2024-01-08" | 2024 || [https://arxiv.org/abs/2311.09064 Imagine the Unseen World: A Benchmark for Systematic Generalization in Visual World Models] || Y Kim, G Singh, J Park et al. || systematic generalization in vision models and world models.
|-
| data-sort-value="2020-01-01" | 2020 || [https://arxiv.org/abs/2003.08934 NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis] || Ben Mildenhall et al. || high-fidelity views of complex 3D scenes, instrumental in creating synthetic data for robotics, and relevant for generating diverse visual environments for training robots.
|-
| data-sort-value="2018-01-01" | 2018 || [https://arxiv.org/abs/1803.10122 World Models] || David Ha and Jürgen Schmidhuber || agent builds a compact model of the world and uses it to plan and dream, improving its performance in real environments. This aligns well with the interest in universal simulators.
|-
| data-sort-value="2017-01-01" | 2017 || [https://arxiv.org/abs/1612.07828 Learning from Simulated and Unsupervised Images through Adversarial Training] || Ashish Shrivastava et al. || technique that refines simulated images to make them more realistic using adversarial training, enhancing the quality of synthetic data for training robotics models.
|}
8b1896b5f0c52e4cb39d6d21662b56343598c35c
World Models In Robotics Learning
0
345
1600
2024-06-27T21:22:57Z
Vrtnis
21
Vrtnis moved page [[World Models In Robotics Learning]] to [[World Models]]
wikitext
text/x-wiki
#REDIRECT [[World Models]]
4fd59cdb8f5af78295fbb22177b329ad9e7f6123
Learning algorithms
0
32
1607
1058
2024-06-28T18:57:53Z
EtcetFelix
38
/* Isaac Sim Integration with Isaac Gym */
wikitext
text/x-wiki
Learning algorithms make it possible to train humanoids to perform different skills such as manipulation or locomotion. Below is an overview of general approaches to training machine learning models for humanoid robots, with example [[applications]]. Typically you need a simulator, a training framework, and a machine learning method to train end-to-end behaviors.
== Physics engines ==
Physics engines are software libraries designed to simulate physical systems in a virtual environment. They are crucial in a variety of fields such as video games, animation, robotics, and engineering simulations. These engines handle the mathematics involved in simulating physical processes like motion, collisions, and fluid dynamics.
For a much more comprehensive overview see [https://simulately.wiki/docs/ Simulately].
===PhysX===
PhysX is a physics engine by NVIDIA used primarily for video games and real-time simulations. It supports rigid body dynamics, cloth simulation, and particle effects, enhancing realism and interactivity in 3D environments.
===MuJoCo===
MuJoCo (Multi-Joint dynamics with Contact) is a physics engine designed for research in robotics and biomechanics. It's known for its speed, accuracy, and ease of use, making it popular for simulating complex systems with robotics and articulated structures.
===Bullet===
Bullet is a physics engine supporting real-time collision detection and multi-physics simulation for VR, games, visual effects, robotics, and machine learning.
==Simulators==
===[[Isaac Sim]]===
Isaac Sim is NVIDIA’s simulation platform for robotics development. It’s part of their Isaac Robotics platform and uses advanced graphics and AI to create realistic simulations.
==== Isaac Sim Features ====
* '''Advanced Physics Simulation''': Includes PhysX and Flex for detailed simulations of physical interactions like rigid bodies, soft bodies, and fluids.
* '''Photorealistic Rendering''': Uses NVIDIA RTX technology to make environments and objects look incredibly realistic, which is great for tasks that need vision-based learning.
* '''Scalability''': Can simulate multiple robots and environments at the same time, thanks to GPU acceleration, making it handle complex simulations efficiently.
* '''Interoperability''': Works with machine learning frameworks like TensorFlow and PyTorch and supports ROS, so you can easily move from simulation to real-world deployment.
* '''Customizable Environments''': Lets you create and customize simulation environments, including importing 3D models and designing different terrains.
* '''Real-Time Feedback''': Provides real-time monitoring and analytics, giving you insights on how tasks are performing and resource usage.
==== Isaac Sim Applications ====
* '''Robotics Research''': Used in academia and industry to develop and test new algorithms for robot perception, control, and planning.
* '''Autonomous Navigation''': Helps simulate and test navigation algorithms for mobile robots and drones, improving path planning and obstacle avoidance.
* '''Manipulation Tasks''': Supports developing robotic skills like object grasping and assembly tasks, making robots more dexterous and precise.
* '''Industrial Automation''': Helps companies design and validate automation solutions for manufacturing and logistics, boosting efficiency and cutting down on downtime.
* '''Education and Training''': A great educational tool that offers hands-on experience in robotics and AI without the risks and costs of physical experiments.
=== Isaac Sim Integration with Isaac Gym ===
Note: Isaac Gym is now deprecated. NVIDIA now forwards users to their improved toolkit, Isaac Lab, also built on top of Isaac Sim.<br>
Isaac Sim works alongside Isaac Gym, NVIDIA’s tool for large-scale training with reinforcement learning. While Isaac Sim focuses on detailed simulations, Isaac Gym is great for efficient training. Together, they offer a comprehensive solution for developing and improving robotics applications.
===[https://github.com/haosulab/ManiSkill ManiSkill]===
===[[VSim]]===
=Training frameworks=
Popular training frameworks are listed here with example applications.
===[https://github.com/NVIDIA-Omniverse/IsaacGymEnvs Isaac Gym]===
Isaac Gym is NVIDIA's robotics simulation tool, part of the Isaac SDK. It leverages GPU acceleration to enable the simulation of thousands of robot bodies simultaneously, making it highly efficient for training machine learning models in robotics. It's designed to streamline robotics applications, focusing on reinforcement learning in a virtual environment.
===[https://gymnasium.farama.org/ Gymnasium]===
Gymnasium is an open-source toolkit for developing and comparing reinforcement learning algorithms. Originally developed by OpenAI as "Gym," it provides a standardized set of environments (like Atari games, robotic simulations, etc.) to test and benchmark AI algorithms. It's widely used in the AI research community to foster innovation and replication in RL studies.
===[[Applications]]===
Over the last decade several advancements have been made in learning locomotion and manipulation skills with simulations. See non-comprehensive list here.
== Training methods ==
===[[Imitation learning]]===
Imitation Learning is a technique where models learn to perform tasks by mimicking expert behaviors. This approach is often used when defining explicit reward functions is challenging. It accelerates learning by using pre-collected datasets of expert demonstrations, reducing the need for trial-and-error in initial learning phases.
===[[Reinforcement Learning]]===
Reinforcement Learning involves agents learning to make decisions by interacting with an environment to maximize cumulative rewards. It's foundational in fields where sequential decision-making is crucial, like gaming, autonomous vehicles, and robotics. RL uses methods like Q-learning and policy gradient to iteratively improve agent performance based on feedback from the environment.
[[Category: Software]]
01bc1ab39239c051fdbe88fa19931392ee4b481a
1608
1607
2024-06-28T19:03:16Z
EtcetFelix
38
/* Training frameworks */
wikitext
text/x-wiki
Learning algorithms make it possible to train humanoids to perform different skills such as manipulation or locomotion. Below is an overview of general approaches to training machine learning models for humanoid robots, with example [[applications]]. Typically you need a simulator, a training framework, and a machine learning method to train end-to-end behaviors.
== Physics engines ==
Physics engines are software libraries designed to simulate physical systems in a virtual environment. They are crucial in a variety of fields such as video games, animation, robotics, and engineering simulations. These engines handle the mathematics involved in simulating physical processes like motion, collisions, and fluid dynamics.
For a much more comprehensive overview see [https://simulately.wiki/docs/ Simulately].
===PhysX===
PhysX is a physics engine by NVIDIA used primarily for video games and real-time simulations. It supports rigid body dynamics, cloth simulation, and particle effects, enhancing realism and interactivity in 3D environments.
===MuJoCo===
MuJoCo (Multi-Joint dynamics with Contact) is a physics engine designed for research in robotics and biomechanics. It's known for its speed, accuracy, and ease of use, making it popular for simulating complex systems with robotics and articulated structures.
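As a rough illustration of how MuJoCo is driven from Python, the sketch below builds a throwaway single-box scene and steps it forward in time. It assumes the official mujoco Python bindings are installed; the XML model is made up purely for this example.
<syntaxhighlight lang=python>
# Minimal sketch using the official MuJoCo Python bindings (pip install mujoco).
# The XML model below is a made-up single-box scene, not taken from any real robot.
import mujoco

XML = """
<mujoco>
  <worldbody>
    <geom type="plane" size="1 1 0.1"/>
    <body name="box" pos="0 0 1">
      <freejoint/>
      <geom type="box" size="0.1 0.1 0.1" mass="1"/>
    </body>
  </worldbody>
</mujoco>
"""

model = mujoco.MjModel.from_xml_string(XML)  # compile the MJCF model
data = mujoco.MjData(model)                  # allocate the simulation state

for _ in range(1000):                        # ~2 seconds at the default 2 ms timestep
    mujoco.mj_step(model, data)

print("box height after falling:", data.qpos[2])  # z-coordinate of the free joint
</syntaxhighlight>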
===Bullet===
Bullet is a physics engine supporting real-time collision detection and multi-physics simulation for VR, games, visual effects, robotics, and machine learning.
==Simulators==
===[[Isaac Sim]]===
Isaac Sim is NVIDIA’s simulation platform for robotics development. It’s part of their Isaac Robotics platform and uses advanced graphics and AI to create realistic simulations.
==== Isaac Sim Features ====
* '''Advanced Physics Simulation''': Includes PhysX and Flex for detailed simulations of physical interactions like rigid bodies, soft bodies, and fluids.
* '''Photorealistic Rendering''': Uses NVIDIA RTX technology to make environments and objects look incredibly realistic, which is great for tasks that need vision-based learning.
* '''Scalability''': Can simulate multiple robots and environments at the same time, thanks to GPU acceleration, making it handle complex simulations efficiently.
* '''Interoperability''': Works with machine learning frameworks like TensorFlow and PyTorch and supports ROS, so you can easily move from simulation to real-world deployment.
* '''Customizable Environments''': Lets you create and customize simulation environments, including importing 3D models and designing different terrains.
* '''Real-Time Feedback''': Provides real-time monitoring and analytics, giving you insights on how tasks are performing and resource usage.
==== Isaac Sim Applications ====
* '''Robotics Research''': Used in academia and industry to develop and test new algorithms for robot perception, control, and planning.
* '''Autonomous Navigation''': Helps simulate and test navigation algorithms for mobile robots and drones, improving path planning and obstacle avoidance.
* '''Manipulation Tasks''': Supports developing robotic skills like object grasping and assembly tasks, making robots more dexterous and precise.
* '''Industrial Automation''': Helps companies design and validate automation solutions for manufacturing and logistics, boosting efficiency and cutting down on downtime.
* '''Education and Training''': A great educational tool that offers hands-on experience in robotics and AI without the risks and costs of physical experiments.
=== Isaac Sim Integration with Isaac Gym ===
Note: Isaac Gym is now deprecated. NVIDIA now directs users to its successor, Isaac Lab, which is also built on top of Isaac Sim.<br>
Isaac Sim works alongside Isaac Gym, NVIDIA’s tool for large-scale training with reinforcement learning. While Isaac Sim focuses on detailed simulations, Isaac Gym is great for efficient training. Together, they offer a comprehensive solution for developing and improving robotics applications.
===[https://github.com/haosulab/ManiSkill ManiSkill]===
===[[VSim]]===
=Training frameworks=
Popular training frameworks are listed here with example applications.
===[https://isaac-sim.github.io/IsaacLab/index.html Isaac Lab]===
Isaac Lab is NVIDIA's modular framework for robot learning, part of the Isaac SDK. It aims to simplify common workflows in robotics research and is the successor to Isaac Gym.
===[https://github.com/NVIDIA-Omniverse/IsaacGymEnvs Isaac Gym]===
Note: Isaac Gym is now deprecated in favor of Isaac Lab. Isaac Gym is NVIDIA's robotics simulation tool, part of the Isaac SDK. It leverages GPU acceleration to enable the simulation of thousands of robot bodies simultaneously, making it highly efficient for training machine learning models in robotics. It's designed to streamline robotics applications, focusing on reinforcement learning in a virtual environment.
===[https://gymnasium.farama.org/ Gymnasium]===
Gymnasium is an open-source toolkit for developing and comparing reinforcement learning algorithms. Originally developed by OpenAI as "Gym," it provides a standardized set of environments (like Atari games, robotic simulations, etc.) to test and benchmark AI algorithms. It's widely used in the AI research community to foster innovation and replication in RL studies.
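The sketch below shows the core Gymnasium interaction loop (reset, step, terminated/truncated) with a random policy; it assumes the gymnasium package and its bundled CartPole environment are available.
<syntaxhighlight lang=python>
# Minimal Gymnasium rollout with a random policy (pip install gymnasium).
import gymnasium as gym

env = gym.make("CartPole-v1")            # standard classic-control environment
obs, info = env.reset(seed=0)            # reset returns (observation, info)

total_reward = 0.0
done = False
while not done:
    action = env.action_space.sample()   # random action; a trained policy goes here
    obs, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated       # episode ends on either condition

env.close()
print("episode return:", total_reward)
</syntaxhighlight>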
===[[Applications]]===
Over the last decade, several advances have been made in learning locomotion and manipulation skills in simulation. See the non-comprehensive list here.
== Training methods ==
===[[Imitation learning]]===
Imitation Learning is a technique where models learn to perform tasks by mimicking expert behaviors. This approach is often used when defining explicit reward functions is challenging. It accelerates learning by using pre-collected datasets of expert demonstrations, reducing the need for trial-and-error in initial learning phases.
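As a toy illustration of the idea, the sketch below performs behavior cloning: a small network is fit to (state, action) pairs with plain supervised regression. The "expert" data here is random placeholder data and PyTorch is assumed to be installed; the sketch only shows the shape of the approach, not a real training pipeline.
<syntaxhighlight lang=python>
# Behavior cloning sketch in PyTorch: regress expert actions from states.
# The "expert" dataset is random placeholder data, used only to show the training loop.
import torch
import torch.nn as nn

state_dim, action_dim = 12, 4
expert_states = torch.randn(1024, state_dim)    # stand-in for recorded robot states
expert_actions = torch.randn(1024, action_dim)  # stand-in for expert demonstration actions

policy = nn.Sequential(
    nn.Linear(state_dim, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, action_dim),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(100):
    pred_actions = policy(expert_states)          # policy's predicted actions
    loss = loss_fn(pred_actions, expert_actions)  # supervised imitation loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print("final imitation loss:", loss.item())
</syntaxhighlight>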
===[[Reinforcement Learning]]===
Reinforcement Learning involves agents learning to make decisions by interacting with an environment to maximize cumulative rewards. It's foundational in fields where sequential decision-making is crucial, like gaming, autonomous vehicles, and robotics. RL uses methods like Q-learning and policy gradient to iteratively improve agent performance based on feedback from the environment.
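To make the Q-learning update concrete, here is a tabular sketch on a made-up five-state chain environment. The environment and reward are invented for illustration; real robot tasks require function approximation rather than a lookup table.
<syntaxhighlight lang=python>
# Tabular Q-learning on a toy 5-state chain: the agent must move right to reach the goal.
# The environment is invented for illustration; real robots need function approximation.
import random

n_states, n_actions = 5, 2            # actions: 0 = move left, 1 = move right
alpha, gamma, epsilon = 0.1, 0.9, 0.1
Q = [[0.0] * n_actions for _ in range(n_states)]

def env_step(state, action):
    nxt = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if nxt == n_states - 1 else 0.0
    return nxt, reward, nxt == n_states - 1          # (next_state, reward, done)

def greedy(q_row):
    best = max(q_row)
    return random.choice([a for a, v in enumerate(q_row) if v == best])

for episode in range(500):
    state, done = 0, False
    for _ in range(200):                              # cap episode length
        if random.random() < epsilon:
            action = random.randrange(n_actions)      # explore
        else:
            action = greedy(Q[state])                 # exploit
        next_state, reward, done = env_step(state, action)
        # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
        state = next_state
        if done:
            break

print("greedy action per state:", [greedy(row) for row in Q])
</syntaxhighlight>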
[[Category: Software]]
eb58fb71cbd97c3413752b079537abb384e1915c
Reinforcement Learning
0
34
1609
1158
2024-06-28T19:17:22Z
EtcetFelix
38
/* Resources */
wikitext
text/x-wiki
This guide is incomplete and a work in progress; you can help by expanding it!
== Reinforcement Learning (RL) ==
Reinforcement Learning (RL) is a machine learning approach where an agent learns to perform tasks by interacting with an environment. It involves the agent receiving rewards or penalties based on its actions and using this feedback to improve its performance over time. RL is particularly useful in robotics for training robots to perform complex tasks autonomously. Here's how RL is applied in robotics, using simulation environments like Isaac Sim and MuJoCo:
== RL in Robotics ==
=== Practical Applications of RL ===
==== Task Automation ====
* Robots can be trained to perform repetitive or dangerous tasks autonomously, such as assembly line work, welding, or hazardous material handling.
* RL enables robots to adapt to new tasks without extensive reprogramming, making them versatile for various industrial applications.
==== Navigation and Manipulation ====
* RL is used to train robots for navigating complex environments and manipulating objects with precision, which is crucial for tasks like warehouse logistics, domestic chores, and medical surgeries.
=== Simulation Environments ===
==== Isaac Sim ====
* Isaac Sim provides a highly realistic and interactive environment where robots can be trained safely and efficiently.
* The simulated environment includes physics, sensors, and other elements that mimic real-world conditions, enabling the transfer of learned behaviors to physical robots.
==== MuJoCo ====
* MuJoCo (Multi-Joint dynamics with Contact) is a physics engine designed for research and development in robotics, machine learning, and biomechanics.
* It offers fast and accurate simulations, which are essential for training RL agents in tasks involving complex dynamics and contact-rich interactions.
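A minimal sketch of loading and stepping a MuJoCo model with the official Python bindings (assuming the mujoco pip package; the single falling sphere is a toy placeholder, not a robot model):
<syntaxhighlight lang=python>
# Minimal MuJoCo sketch: compile a toy MJCF model and step the dynamics.
import mujoco

MJCF = """
<mujoco>
  <worldbody>
    <body pos="0 0 1">
      <freejoint/>
      <geom type="sphere" size="0.1"/>
    </body>
  </worldbody>
</mujoco>
"""

model = mujoco.MjModel.from_xml_string(MJCF)  # compile the model description
data = mujoco.MjData(model)                   # allocate the simulation state

for _ in range(500):                          # advance the physics 500 timesteps
    mujoco.mj_step(model, data)

print("z-coordinate of the free body:", data.qpos[2])
</syntaxhighlight>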
== Training algorithms ==
* [https://en.wikipedia.org/wiki/Advantage_Actor_Critic A2C]
* [https://en.wikipedia.org/wiki/Proximal_policy_optimization PPO]
* [https://spinningup.openai.com/en/latest/algorithms/sac.html SAC]
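These algorithms are implemented in off-the-shelf libraries; for example, the sketch below trains PPO using Stable-Baselines3 (an assumed dependency, not referenced elsewhere on this page) on a Gymnasium toy task. A robotics project would substitute a simulator-backed environment such as one built on Isaac Sim or MuJoCo:
<syntaxhighlight lang=python>
# PPO training sketch with Stable-Baselines3 (assumed: pip install stable-baselines3).
# CartPole-v1 is a toy stand-in for a robot environment.
import gymnasium as gym
from stable_baselines3 import PPO

env = gym.make("CartPole-v1")
model = PPO("MlpPolicy", env, verbose=0)  # actor-critic policy with MLP networks
model.learn(total_timesteps=50_000)       # run the PPO optimization loop

# roll out the trained policy for a few steps
obs, _ = env.reset()
for _ in range(200):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
env.close()
</syntaxhighlight>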
== Resources ==
* [https://mandi-zhao.gitbook.io/deeprl-notes Mandy Zhao's Reinforcement Learning Notes]
* [https://cs224r.stanford.edu/slides/cs224r-actor-critic-split.pdf Stanford CS224R Actor Critic Slides]
* [https://farama.org/Announcing-The-Farama-Foundation RL Environment API by Farama Explanation]
[[Category: Software]]
[[Category: Reinforcement Learning]]
dbd22d009cabcb5a3f4b220187b7358e2fef9754
Main Page
0
1
1616
1589
2024-07-01T14:41:19Z
Ben
2
/* List of Robots */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.youtube.com/watch?v=LiNgr1tz49I&list=PLZnJoM76RM6ItAfZIxJYNKdaR_BobleLY Advanced Robot Dynamics]
| High-quality open-source course from CMU
|-
| [https://www.youtube.com/watch?v=6rUdAOCNXAU&list=PLZnJoM76RM6KugDT9sw5zhAmqKnGeoLRa Optimal Control]
| High-quality open-source course from CMU
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
| Textbook for graduate-level courses in robotics
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Courses]]
| Category for pages about useful courses related to robotics
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Firmware]]
| Category for pages relating to firmware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Papers]]
| Category for humanoid robotics papers
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== List of Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]], [[Alex]]
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[Engineered Arts]]
| [[Ameca]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[GALBOT]]
| [[GALBOT]]
|-
| [[Haier]]
| [[Kuavo (Kuafu)]]
|-
| [[Honda Robotics]]
| [[ASIMO]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Instituto Italiano]]
| [[iCub]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[LASER Robotics]]
| [[HECTOR V2]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[NASA]]
| [[Valkyrie]], [[Robonaut2]]
|-
| [[NEURA Robotics]]
| [[4NE-1]]
|-
| [[Noetix]]
| [[Dora]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[Pollen Robotics]]
| [[Reachy]]
|-
| [[Proxy]]
|
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[SoftBank Robotics]]
| [[Pepper]], [[NAO]]
|-
| [[Stanford Robotics Lab]]
| [[OceanOneK]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[SUPCON]]
| [[Navigator α]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[THK]]
|
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[UC Berkeley]]
| [[Berkeley Blue]]
|-
| [[Unitree]]
| [[H1]], [[G1]]
|-
| [[University of Tehran]]
| [[Surena IV]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[WorkFar]]
| [[WorkFar Syntro]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Xpeng]]
| [[PX5]]
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== List of Actuators ===
{| class="wikitable"
|-
! Actuator
! Notes
|-
| [[OBot]]
| Open-source actuator
|-
| [[SPIN Servo]]
| Open-source actuator
|-
| [[VESCular6]]
| A project based on [[VESC]]
|-
| [[ODrive]]
| A precision motor controller
|-
| [[Solo Motor Controller]]
| A motor controller alternative to the [[ODrive]].
|-
| [[J60]]
| Actuators built for the [[DEEP Robotics]] quadrupeds.
|-
| [[K-Scale Motor Controller]]
| An open-source motor controller
|}
=== Discord community ===
[https://discord.gg/k3y9ymM9 Discord]
b1a6a036f3445b279a66edebb322f6c5f3fdb040
1633
1616
2024-07-06T20:26:26Z
Ben
2
/* List of Robots */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.youtube.com/watch?v=LiNgr1tz49I&list=PLZnJoM76RM6ItAfZIxJYNKdaR_BobleLY Advanced Robot Dynamics]
| High-quality open-source course from CMU
|-
| [https://www.youtube.com/watch?v=6rUdAOCNXAU&list=PLZnJoM76RM6KugDT9sw5zhAmqKnGeoLRa Optimal Control]
| High-quality open-source course from CMU
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
| Textbook for graduate-level courses in robotics
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Courses]]
| Category for pages about useful courses related to robotics
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Firmware]]
| Category for pages relating to firmware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Papers]]
| Category for humanoid robotics papers
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== List of Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]], [[Alex]]
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[Engineered Arts]]
| [[Ameca]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[GALBOT]]
| [[GALBOT]]
|-
| [[Haier]]
| [[Kuavo (Kuafu)]]
|-
| [[Honda Robotics]]
| [[ASIMO]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Instituto Italiano]]
| [[iCub]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[LASER Robotics]]
| [[HECTOR V2]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[NASA]]
| [[Valkyrie]], [[Robonaut2]]
|-
| [[NEURA Robotics]]
| [[4NE-1]]
|-
| [[Noetix]]
| [[Dora]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[PaXini]]
| [[Tora]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[Pollen Robotics]]
| [[Reachy]]
|-
| [[Proxy]]
|
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[SoftBank Robotics]]
| [[Pepper]], [[NAO]]
|-
| [[Stanford Robotics Lab]]
| [[OceanOneK]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[SUPCON]]
| [[Navigator α]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[THK]]
|
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[UC Berkeley]]
| [[Berkeley Blue]]
|-
| [[Unitree]]
| [[H1]], [[G1]]
|-
| [[University of Tehran]]
| [[Surena IV]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[WorkFar]]
| [[WorkFar Syntro]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Xpeng]]
| [[PX5]]
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== List of Actuators ===
{| class="wikitable"
|-
! Actuator
! Notes
|-
| [[OBot]]
| Open-source actuator
|-
| [[SPIN Servo]]
| Open-source actuator
|-
| [[VESCular6]]
| A project based on [[VESC]]
|-
| [[ODrive]]
| A precision motor controller
|-
| [[Solo Motor Controller]]
| A motor controller alternative to the [[ODrive]].
|-
| [[J60]]
| Actuators built for the [[DEEP Robotics]] quadrupeds.
|-
| [[K-Scale Motor Controller]]
| An open-source motor controller
|}
=== Discord community ===
[https://discord.gg/k3y9ymM9 Discord]
f2c45f3ce63254c657df1bd2cb2ceeb4d4098018
1644
1633
2024-07-06T21:00:09Z
Ben
2
/* Discord community */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.youtube.com/watch?v=LiNgr1tz49I&list=PLZnJoM76RM6ItAfZIxJYNKdaR_BobleLY Advanced Robot Dynamics]
| High-quality open-source course from CMU
|-
| [https://www.youtube.com/watch?v=6rUdAOCNXAU&list=PLZnJoM76RM6KugDT9sw5zhAmqKnGeoLRa Optimal Control]
| High-quality open-source course from CMU
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
| Textbook for graduate-level courses in robotics
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Courses]]
| Category for pages about useful courses related to robotics
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Firmware]]
| Category for pages relating to firmware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Papers]]
| Category for humanoid robotics papers
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== List of Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]], [[Alex]]
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[Engineered Arts]]
| [[Ameca]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure]]
| [[Figure 01]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[GALBOT]]
| [[GALBOT]]
|-
| [[Haier]]
| [[Kuavo (Kuafu)]]
|-
| [[Honda Robotics]]
| [[ASIMO]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Instituto Italiano]]
| [[iCub]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[LASER Robotics]]
| [[HECTOR V2]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[NASA]]
| [[Valkyrie]], [[Robonaut2]]
|-
| [[NEURA Robotics]]
| [[4NE-1]]
|-
| [[Noetix]]
| [[Dora]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[PaXini]]
| [[Tora]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[Pollen Robotics]]
| [[Reachy]]
|-
| [[Proxy]]
|
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[SoftBank Robotics]]
| [[Pepper]], [[NAO]]
|-
| [[Stanford Robotics Lab]]
| [[OceanOneK]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[SUPCON]]
| [[Navigator α]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[THK]]
|
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[UC Berkeley]]
| [[Berkeley Blue]]
|-
| [[Unitree]]
| [[H1]], [[G1]]
|-
| [[University of Tehran]]
| [[Surena IV]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[WorkFar]]
| [[WorkFar Syntro]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Xpeng]]
| [[PX5]]
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== List of Actuators ===
{| class="wikitable"
|-
! Actuator
! Notes
|-
| [[OBot]]
| Open-source actuator
|-
| [[SPIN Servo]]
| Open-source actuator
|-
| [[VESCular6]]
| A project based on [[VESC]]
|-
| [[ODrive]]
| A precision motor controller
|-
| [[Solo Motor Controller]]
| A motor controller alternative to the [[ODrive]].
|-
| [[J60]]
| Actuators built for the [[DEEP Robotics]] quadrupeds.
|-
| [[K-Scale Motor Controller]]
| An open-source motor controller
|}
=== Discord community ===
[https://discord.gg/rhCy6UdBRD Discord]
f20183aac92439a1e0b1f141e786023389e4d43c
K-Scale Weekly Progress Updates
0
294
1627
1594
2024-07-05T16:53:25Z
Ben
2
wikitext
text/x-wiki
{| class="wikitable"
|-
! Link
|-
| [https://x.com/kscalelabs/status/1809263616958374286 2024.07.05]
|-
| [https://x.com/kscalelabs/status/1804184936574030284 2024.06.21]
|-
| [https://x.com/kscalelabs/status/1801749382167204086 2024.06.14]
|-
| [https://x.com/kscalelabs/status/1799197382208590132 2024.06.07]
|-
| [https://x.com/kscalelabs/status/1796617681455775944 2024.05.31]
|-
| [https://x.com/kscalelabs/status/1794109131214712914 2024.05.24]
|-
| [https://x.com/kscalelabs/status/1791507358780461496 2024.05.17]
|-
| [https://x.com/kscalelabs/status/1788968705378181145 2024.05.10]
|}
[[Category:K-Scale]]
08e2f9e423b533cd542b47f0e73289956df33f60
PaXini
0
367
1634
2024-07-06T20:28:19Z
Ben
2
Created page with "PaXini is a Chinese startup building a wheeled humanoid robot called [[Tora]]. They are based in Shenzhen. {{infobox company | name = PaXini Technology | country = China | we..."
wikitext
text/x-wiki
PaXini is a Chinese startup building a wheeled humanoid robot called [[Tora]]. They are based in Shenzhen.
{{infobox company
| name = PaXini Technology
| country = China
| website_link = https://paxini.com/
| robots = [[Tora]]
}}
148bcb4680512f2e2dbc7345e6bce0edce972337
Tora
0
368
1635
2024-07-06T20:29:05Z
Ben
2
Created page with "Tora is a humanoid robot from the company [[PaXini]] {{infobox robot | name = Tora | organization = [[PaXini]] | cost = USD 20,000 | purchase_link = https://www.alibaba.com/p..."
wikitext
text/x-wiki
Tora is a humanoid robot from the company [[PaXini]]
{{infobox robot
| name = Tora
| organization = [[PaXini]]
| cost = USD 20,000
| purchase_link = https://www.alibaba.com/product-detail/PAXINI-Tactile-Humanoid-Robot-With-Tactile_1601047975093.html
}}
68384017e41070b2c4dc9bccaba9c584694f8a02
Cassie
0
24
1636
253
2024-07-06T20:30:05Z
Ben
2
wikitext
text/x-wiki
[[File:Cassie.jpg|right|200px|thumb]]
Cassie is a bipedal robot designed by Oregon State University and licensed and built by [[Agility]].
{{infobox robot
| name = Cassie
| organization = [[Agility]]
| video_link = https://www.youtube.com/watch?v=64hKiuJ31a4
| cost = USD 250,000
| height = 115 cm
| weight = 31 kg
| speed = >4 m/s (8.95 mph)
| battery_life = 5 hours
| battery_capacity = 1 kWh
| dof = 10 (5 per leg)
| number_made = ~12
| status = Retired
}}
== Development ==
On February 9, 2017, Cassie was unveiled at an event at Oregon State University featuring a live demo. A YouTube video was also posted to both the OSU and Agility Robotics YouTube channels.
On September 5, 2017, the University of Michigan received the first Cassie, which it named "Cassie Blue." The university would later receive "Cassie Yellow."
==Firsts and World Record==
Oregon State University's DRAIL lab used a model trained with reinforcement learning to have Cassie run a 100 m dash in 24.73 seconds. The Guinness World Record required a standing start and finish, so Cassie averaged a speed of over 4 m/s. For comparison, at the time the Guinness World Record for the fastest running humanoid was held by ASIMO, at 2.5 m/s, a run that had to be performed indoors on a specially leveled floor. Cassie's run took place outdoors at the Whyte Track and Field Center on Oregon State's campus. The record was announced in September 2022, but the run itself took place about six months earlier. Cassie and the OSU DRAIL lab have an entire page dedicated to them in the 2024 Guinness World Records book. Footage of the run can be seen [https://www.youtube.com/watch?v=DdojWYOK0Nc here].
[[Category: Robots]]
1e7db9d6a525aeb38ccddf6d846ed5b4c3dc028c
1650
1636
2024-07-06T21:39:28Z
Ben
2
wikitext
text/x-wiki
[[File:Cassie.jpg|right|200px|thumb]]
Cassie is a bipedal robot designed by Oregon State University and licensed and built by [[Agility]].
<youtube>https://www.youtube.com/watch?v=64hKiuJ31a4</youtube>
{{infobox robot
| name = Cassie
| organization = [[Agility]]
| video_link = https://www.youtube.com/watch?v=64hKiuJ31a4
| cost = USD 250,000
| height = 115 cm
| weight = 31 kg
| speed = >4 m/s (8.95 mph)
| battery_life = 5 hours
| battery_capacity = 1 kWh
| dof = 10 (5 per leg)
| number_made = ~12
| status = Retired
}}
== Development ==
On February 9, 2017, Cassie was unveiled at an event at Oregon State University featuring a live demo. A YouTube video was also posted to both the OSU and Agility Robotics YouTube channels.
On September 5, 2017, the University of Michigan received the first Cassie, which it named "Cassie Blue." The university would later receive "Cassie Yellow."
==Firsts and World Record==
Oregon State University's DRAIL lab used a model trained with reinforcement learning to have Cassie run a 100 m dash in 24.73 seconds. The Guinness World Record required a standing start and finish, so Cassie averaged a speed of over 4 m/s. For comparison, at the time the Guinness World Record for the fastest running humanoid was held by ASIMO, at 2.5 m/s, a run that had to be performed indoors on a specially leveled floor. Cassie's run took place outdoors at the Whyte Track and Field Center on Oregon State's campus. The record was announced in September 2022, but the run itself took place about six months earlier. Cassie and the OSU DRAIL lab have an entire page dedicated to them in the 2024 Guinness World Records book. Footage of the run can be seen [https://www.youtube.com/watch?v=DdojWYOK0Nc here].
[[Category: Robots]]
dc48d81db3806995deca7e583283a69c677204af
Underactuated Robotics
0
13
1637
885
2024-07-06T20:33:30Z
Ben
2
wikitext
text/x-wiki
This is a course taught by Russ Tedrake at MIT. It is available [https://underactuated.csail.mit.edu/ here].
[[Category:Courses]]
c5fce9af9e8669ceeaafd20ef1882dee30344564
Neo
0
55
1638
494
2024-07-06T20:52:50Z
Ben
2
wikitext
text/x-wiki
[[File:1x Neo.jpg|thumb]]
{{infobox robot
| name = NEO
| organization = [[1X]]
| height = 165 cm
| weight = 30 kg
| video_link = https://www.youtube.com/watch?v=ikg7xGxvFTs
| speed = 4 km/hr
| carry_capacity = 20 kg
| runtime = 2-4 hrs
}}
NEO is a bipedal humanoid robot developed by [[1X]]. It is designed to look and move like a human, featuring a head, torso, arms, and legs. NEO can perform a wide range of tasks and is well suited to industrial work such as security, logistics, manufacturing, operating machinery, and handling complex tasks. It is also envisioned to provide valuable home assistance and perform chores like cleaning or organizing.
NEO's soft, tendon-based design is meant to have very low inertia, allowing it to work in close proximity to humans. It will weigh 30 kilograms, with a 20-kilogram carrying capacity. 1X hopes for NEO to be "an all-purpose android assistant to your daily life."
<youtube>https://www.youtube.com/watch?v=ikg7xGxvFTs</youtube>
[[Category:Robots]]
== References ==
* Ackerman, Evan (2024). "Humanoid Robots are Getting to Work." ''IEEE Spectrum''.
740c1ae8087d01657ba0cb6e66b9c8652d97879e
1639
1638
2024-07-06T20:53:10Z
Ben
2
wikitext
text/x-wiki
[[File:1x Neo.jpg|thumb]]
<youtube>https://www.youtube.com/watch?v=ikg7xGxvFTs</youtube>
NEO is a bipedal humanoid robot developed by [[1X]]. It is designed to look and move like a human, featuring a head, torso, arms, and legs. NEO can perform a wide range of tasks and is well suited to industrial work such as security, logistics, manufacturing, operating machinery, and handling complex tasks. It is also envisioned to provide valuable home assistance and perform chores like cleaning or organizing.
{{infobox robot
| name = NEO
| organization = [[1X]]
| height = 165 cm
| weight = 30 kg
| video_link = https://www.youtube.com/watch?v=ikg7xGxvFTs
| speed = 4 km/hr
| carry_capacity = 20 kg
| runtime = 2-4 hrs
}}
NEO's soft, tendon-based design is meant to have very low inertia, allowing it to work in close proximity to humans. It will weigh 30 kilograms, with a 20-kilogram carrying capacity. 1X hopes for NEO to be "an all-purpose android assistant to your daily life."
[[Category:Robots]]
== References ==
* Ackerman, Evan (2024). "Humanoid Robots are Getting to Work." ''IEEE Spectrum''.
0b5e86ed46130dc85a95c95506702daed0384c7a
Eve
0
54
1640
451
2024-07-06T20:53:22Z
Ben
2
wikitext
text/x-wiki
EVE is a versatile and agile humanoid robot developed by [[1X]]. It is equipped with cameras and sensors to perceive and interact with its surroundings. EVE’s mobility, dexterity, and balance allow it to navigate complex environments and manipulate objects effectively.
<youtube>https://www.youtube.com/watch?v=20GHG-R9eFI</youtube>
{{infobox robot
| name = EVE
| organization = [[1X]]
| height = 186 cm
| weight = 86 kg
| speed = 14.4 km/hr
| carry_capacity = 15 kg
| runtime = 6 hrs
| video_link = https://www.youtube.com/watch?v=20GHG-R9eFI
}}
[[Category:Robots]]
da8acccb10e80c932677af9a49d07480a3d4e254
ICub
0
286
1641
1297
2024-07-06T20:56:21Z
Ben
2
wikitext
text/x-wiki
The iCub is a research-grade humanoid robot for developing and testing embodied AI algorithms. The iCub Project integrates results from various [[Instituto Italiano]] Research Units. The iCub Project is a key initiative for IIT, aiming to transfer robotics technologies to industrial applications.
{{infobox robot
| name = iCub
| organization = [[Instituto Italiano]]
| height = 104 cm (3 ft 5 in)
| weight = 22 kg (48.5 lbs)
| video_link = https://www.youtube.com/watch?v=ErgfgF0uwUo
| cost = Approximately €250,000
}}
== General Specifications ==
The number of degrees of freedom is as follows:
{| class="wikitable"
! Component !! # of degrees of freedom !! Notes
|-
| Eyes || 3 || Independent vergence and common tilt
|-
| Head || 3 || The neck has three degrees of freedom to tilt, swing, and pan
|-
| Chest || 3 || The torso can also tilt, swing, and pan
|-
| Arms || 7 (each) || The shoulder has 3 DoF, 1 in the elbow, and three in the wrist
|-
| Hands || 9 || The hand has 19 joints coupled in various combinations: the thumb, index, and middle finger are independent (coupled distal phalanxes), the ring and little finger are coupled. The thumb can additionally rotate over the palm.
|-
| Legs || 6 (each) || 6 DoF are sufficient to walk.
|}
== Sensors ==
{| class="wikitable"
! Sensor type !! Number !! Notes
|-
| Cameras || 2 || Mounted in the eyes (see above), Pointgrey Dragonfly 2 (640x480)
|-
| Microphones || 2 || SoundMan high-quality stereo omnidirectional microphone, -46 dB, 10 V, 20–20,000 Hz ±3 dB
|-
| Inertial sensors || 3+3 || Three axis gyroscopes + three axis accelerometers + three axis geomagnetic sensor based on BOSCH BNO055 chip, mounted in the head. (100Hz)
|-
| Joint sensors || For each large joint || Absolute magnetic encoder (12bit resolution @1kHz) at the joint, high-resolution incremental encoder at the motor side, hall-effect sensors for commutation (brushless motors only)
|-
| Joint sensors || For each small joint || Absolute magnetic encoder (except the fingers which use a custom hall-effect sensor), medium-resolution incremental encoder at the motor
|-
| Force/torque sensors || 6 || 6x6-axis force/torque sensors are mounted on the upper part of the arm and legs plus 2 additional sensors mounted closer to the ankle for higher precision ZMP estimation (100Hz)
|-
| Tactile sensors || More than 3000 (*) || Capacitive tactile sensors (8 bit resolution at 40Hz) are installed in the fingertips, palms, upper and fore-arms, chest and optionally at the legs (*).
|}
{| class="wikitable"
|+ Capabilities of iCub
! Task !! Description
|-
| Crawling || Using visual guidance with an optic marker on the floor
|-
| Solving complex 3D mazes || Demonstrated ability to navigate and solve intricate 3D mazes
|-
| Archery || Shooting arrows with a bow and learning to hit the center of the target
|-
| Facial expressions || Capable of expressing emotions through facial expressions
|-
| Force control || Utilizing proximal force/torque sensors for precise force control
|-
| Grasping small objects || Able to grasp and manipulate small objects such as balls and plastic bottles
|-
| Collision avoidance || Avoids collisions within non-static environments and can also avoid self-collision
|}
== Links ==
* [https://www.iit.it/research/lines/icub IIT official website on iCub]
* [https://www.youtube.com/watch?v=znF1-S9JmzI Presentation of iCub by IIT]
[[Category:Robots]]
[[Category:Humanoid Robots]]
a33e1758b65bfdc25d1b83f64bbd20cf364f564d
1642
1641
2024-07-06T20:57:14Z
Ben
2
wikitext
text/x-wiki
The iCub is a research-grade humanoid robot for developing and testing embodied AI algorithms. The iCub Project integrates results from various [[Instituto Italiano]] Research Units. The iCub Project is a key initiative for IIT, aiming to transfer robotics technologies to industrial applications.
{{infobox robot
| name = iCub
| organization = [[Instituto Italiano]]
| height = 104 cm (3 ft 5 in)
| weight = 22 kg (48.5 lbs)
| video_link = https://www.youtube.com/watch?v=ErgfgF0uwUo
| cost = Approximately €250,000
}}
== General Specifications ==
The number of degrees of freedom is as follows:
{| class="wikitable"
! Component !! # of degrees of freedom !! Notes
|-
| Eyes || 3 || Independent vergence and common tilt
|-
| Head || 3 || The neck has three degrees of freedom to tilt, swing, and pan
|-
| Chest || 3 || The torso can also tilt, swing, and pan
|-
| Arms || 7 (each) || The shoulder has 3 DoF, 1 in the elbow, and three in the wrist
|-
| Hands || 9 || The hand has 19 joints coupled in various combinations: the thumb, index, and middle finger are independent (coupled distal phalanxes), the ring and little finger are coupled. The thumb can additionally rotate over the palm.
|-
| Legs || 6 (each) || 6 DoF are sufficient to walk.
|}
== Sensors ==
{| class="wikitable"
! Sensor type !! Number !! Notes
|-
| Cameras || 2 || Mounted in the eyes (see above), Pointgrey Dragonfly 2 (640x480)
|-
| Microphones || 2 || SoundMan high-quality stereo omnidirectional microphone, -46 dB, 10 V, 20–20,000 Hz ±3 dB
|-
| Inertial sensors || 3+3 || Three axis gyroscopes + three axis accelerometers + three axis geomagnetic sensor based on BOSCH BNO055 chip, mounted in the head. (100Hz)
|-
| Joint sensors || For each large joint || Absolute magnetic encoder (12bit resolution @1kHz) at the joint, high-resolution incremental encoder at the motor side, hall-effect sensors for commutation (brushless motors only)
|-
| Joint sensors || For each small joint || Absolute magnetic encoder (except the fingers which use a custom hall-effect sensor), medium-resolution incremental encoder at the motor
|-
| Force/torque sensors || 6 || 6x6-axis force/torque sensors are mounted on the upper part of the arm and legs plus 2 additional sensors mounted closer to the ankle for higher precision ZMP estimation (100Hz)
|-
| Tactile sensors || More than 3000 (*) || Capacitive tactile sensors (8 bit resolution at 40Hz) are installed in the fingertips, palms, upper and fore-arms, chest and optionally at the legs (*).
|}
{| class="wikitable"
|+ Capabilities of iCub
! Task !! Description
|-
| Crawling || Using visual guidance with an optic marker on the floor
|-
| Solving complex 3D mazes || Demonstrated ability to navigate and solve intricate 3D mazes
|-
| Archery || Shooting arrows with a bow and learning to hit the center of the target
|-
| Facial expressions || Capable of expressing emotions through facial expressions
|-
| Force control || Utilizing proximal force/torque sensors for precise force control
|-
| Grasping small objects || Able to grasp and manipulate small objects such as balls and plastic bottles
|-
| Collision avoidance || Avoids collisions within non-static environments and can also avoid self-collision
|}
== Links ==
* [https://www.iit.it/research/lines/icub IIT official website on iCub]
* [https://www.youtube.com/watch?v=znF1-S9JmzI Presentation of iCub by IIT]
{{#related:Instituto Italiano}}
[[Category:Robots]]
[[Category:Humanoid Robots]]
77442c64b572e511bbd6fc3e36c5f731336ca297
1643
1642
2024-07-06T20:57:36Z
Ben
2
/* Links */
wikitext
text/x-wiki
The iCub is a research-grade humanoid robot for developing and testing embodied AI algorithms. The iCub Project integrates results from various [[Instituto Italiano]] Research Units. The iCub Project is a key initiative for IIT, aiming to transfer robotics technologies to industrial applications.
{{infobox robot
| name = iCub
| organization = [[Instituto Italiano]]
| height = 104 cm (3 ft 5 in)
| weight = 22 kg (48.5 lbs)
| video_link = https://www.youtube.com/watch?v=ErgfgF0uwUo
| cost = Approximately €250,000
}}
== General Specifications ==
The number of degrees of freedom is as follows:
{| class="wikitable"
! Component !! # of degrees of freedom !! Notes
|-
| Eyes || 3 || Independent vergence and common tilt
|-
| Head || 3 || The neck has three degrees of freedom to tilt, swing, and pan
|-
| Chest || 3 || The torso can also tilt, swing, and pan
|-
| Arms || 7 (each) || The shoulder has 3 DoF, 1 in the elbow, and three in the wrist
|-
| Hands || 9 || The hand has 19 joints coupled in various combinations: the thumb, index, and middle finger are independent (coupled distal phalanxes), the ring and little finger are coupled. The thumb can additionally rotate over the palm.
|-
| Legs || 6 (each) || 6 DoF are sufficient to walk.
|}
== Sensors ==
{| class="wikitable"
! Sensor type !! Number !! Notes
|-
| Cameras || 2 || Mounted in the eyes (see above), Pointgrey Dragonfly 2 (640x480)
|-
| Microphones || 2 || SoundMan high-quality stereo omnidirectional microphone, -46 dB, 10 V, 20–20,000 Hz ±3 dB
|-
| Inertial sensors || 3+3 || Three axis gyroscopes + three axis accelerometers + three axis geomagnetic sensor based on BOSCH BNO055 chip, mounted in the head. (100Hz)
|-
| Joint sensors || For each large joint || Absolute magnetic encoder (12bit resolution @1kHz) at the joint, high-resolution incremental encoder at the motor side, hall-effect sensors for commutation (brushless motors only)
|-
| Joint sensors || For each small joint || Absolute magnetic encoder (except the fingers which use a custom hall-effect sensor), medium-resolution incremental encoder at the motor
|-
| Force/torque sensors || 6 || 6x6-axis force/torque sensors are mounted on the upper part of the arm and legs plus 2 additional sensors mounted closer to the ankle for higher precision ZMP estimation (100Hz)
|-
| Tactile sensors || More than 3000 (*) || Capacitive tactile sensors (8 bit resolution at 40Hz) are installed in the fingertips, palms, upper and fore-arms, chest and optionally at the legs (*).
|}
{| class="wikitable"
|+ Capabilities of iCub
! Task !! Description
|-
| Crawling || Using visual guidance with an optic marker on the floor
|-
| Solving complex 3D mazes || Demonstrated ability to navigate and solve intricate 3D mazes
|-
| Archery || Shooting arrows with a bow and learning to hit the center of the target
|-
| Facial expressions || Capable of expressing emotions through facial expressions
|-
| Force control || Utilizing proximal force/torque sensors for precise force control
|-
| Grasping small objects || Able to grasp and manipulate small objects such as balls and plastic bottles
|-
| Collision avoidance || Avoids collisions within non-static environments and can also avoid self-collision
|}
== Links ==
* [https://www.iit.it/research/lines/icub IIT official website on iCub]
* [https://www.youtube.com/watch?v=znF1-S9JmzI Presentation of iCub by IIT]
[[Category:Robots]]
[[Category:Humanoid Robots]]
a33e1758b65bfdc25d1b83f64bbd20cf364f564d
1660
1643
2024-07-06T21:44:13Z
Ben
2
wikitext
text/x-wiki
The iCub is a research-grade humanoid robot for developing and testing embodied AI algorithms. The iCub Project integrates results from various [[Instituto Italiano]] Research Units. The iCub Project is a key initiative for IIT, aiming to transfer robotics technologies to industrial applications.
{{infobox robot
| name = iCub
| organization = [[Instituto Italiano]]
| height = 104 cm (3 ft 5 in)
| weight = 22 kg (48.5 lbs)
| video_link = https://www.youtube.com/watch?v=ErgfgF0uwUo
| cost = Approximately €250,000
}}
<youtube>https://www.youtube.com/watch?v=ErgfgF0uwUo</youtube>
== General Specifications ==
The number of degrees of freedom is as follows:
{| class="wikitable"
! Component !! # of degrees of freedom !! Notes
|-
| Eyes || 3 || Independent vergence and common tilt
|-
| Head || 3 || The neck has three degrees of freedom to tilt, swing, and pan
|-
| Chest || 3 || The torso can also tilt, swing, and pan
|-
| Arms || 7 (each) || The shoulder has 3 DoF, 1 in the elbow, and three in the wrist
|-
| Hands || 9 || The hand has 19 joints coupled in various combinations: the thumb, index, and middle finger are independent (coupled distal phalanxes), the ring and little finger are coupled. The thumb can additionally rotate over the palm.
|-
| Legs || 6 (each) || 6 DoF are sufficient to walk.
|}
== Sensors ==
{| class="wikitable"
! Sensor type !! Number !! Notes
|-
| Cameras || 2 || Mounted in the eyes (see above), Pointgrey Dragonfly 2 (640x480)
|-
| Microphones || 2 || SoundMan high-quality stereo omnidirectional microphone, -46 dB, 10 V, 20–20,000 Hz ±3 dB
|-
| Inertial sensors || 3+3 || Three axis gyroscopes + three axis accelerometers + three axis geomagnetic sensor based on BOSCH BNO055 chip, mounted in the head. (100Hz)
|-
| Joint sensors || For each large joint || Absolute magnetic encoder (12bit resolution @1kHz) at the joint, high-resolution incremental encoder at the motor side, hall-effect sensors for commutation (brushless motors only)
|-
| Joint sensors || For each small joint || Absolute magnetic encoder (except the fingers which use a custom hall-effect sensor), medium-resolution incremental encoder at the motor
|-
| Force/torque sensors || 6 || 6x6-axis force/torque sensors are mounted on the upper part of the arm and legs plus 2 additional sensors mounted closer to the ankle for higher precision ZMP estimation (100Hz)
|-
| Tactile sensors || More than 3000 (*) || Capacitive tactile sensors (8 bit resolution at 40Hz) are installed in the fingertips, palms, upper and fore-arms, chest and optionally at the legs (*).
|}
{| class="wikitable"
|+ Capabilities of iCub
! Task !! Description
|-
| Crawling || Using visual guidance with an optic marker on the floor
|-
| Solving complex 3D mazes || Demonstrated ability to navigate and solve intricate 3D mazes
|-
| Archery || Shooting arrows with a bow and learning to hit the center of the target
|-
| Facial expressions || Capable of expressing emotions through facial expressions
|-
| Force control || Utilizing proximal force/torque sensors for precise force control
|-
| Grasping small objects || Able to grasp and manipulate small objects such as balls and plastic bottles
|-
| Collision avoidance || Avoids collisions within non-static environments and can also avoid self-collision
|}
== Links ==
* [https://www.iit.it/research/lines/icub IIT official website on iCub]
* [https://www.youtube.com/watch?v=znF1-S9JmzI Presentation of iCub by IIT]
[[Category:Robots]]
[[Category:Humanoid Robots]]
5a33ae5294017f6aa74d69fc79eb1f96c0c75191
Talk:Main Page
1
63
1645
266
2024-07-06T21:12:45Z
Ben
2
wikitext
text/x-wiki
You should install a YouTube plugin so we can embed robot demo videos! --[[User:Modeless|Modeless]] ([[User talk:Modeless|talk]]) 22:26, 24 April 2024 (UTC)
: Added :) --[[User:Ben|Ben]] ([[User talk:Ben|talk]]) 21:12, 6 July 2024 (UTC)
1608584b4fe83acb4b666140f57fc963ddb06b70
RAISE-A1
0
123
1646
467
2024-07-06T21:38:39Z
Ben
2
wikitext
text/x-wiki
RAISE-A1 is the first-generation general embodied intelligent robot developed by [[AGIBOT]]. The robot showcases industry-leading capabilities in bipedal walking intelligence and human-machine interaction, and is designed for use in various fields such as flexible manufacturing, interactive services, education and research, specialized avatars, warehousing logistics, and robotic household assistants.
<youtube>https://www.youtube.com/watch?v=PIYJtZmzs70</youtube>
{{infobox robot
| name = RAISE-A1
| organization = [[AGIBOT]]
| height = 175 cm
| weight = 55 kg
| single_arm_payload = 5 kg
| runtime = 5 Hrs
| walk_speed = 7 km/h
| video_link = https://www.youtube.com/watch?v=PIYJtZmzs70
| cost =
}}
[[Category:Robots]]
cd53f6a7c27f42f61db92278b4a958269743efb1
Valkyrie
0
117
1647
466
2024-07-06T21:38:59Z
Ben
2
wikitext
text/x-wiki
NASA’s Valkyrie, also known as R5, is a robust, rugged, and entirely electric humanoid robot. It was designed and built by the Johnson Space Center (JSC) Engineering Directorate to compete in the 2013 DARPA Robotics Challenge (DRC) Trials.
<youtube>https://www.youtube.com/watch?v=LaYlQYHXJio</youtube>
{{infobox robot
| name = Valkyrie
| organization = [[NASA]]
| height = 190 cm
| weight = 125 kg
| video_link = https://www.youtube.com/watch?v=LaYlQYHXJio
| cost =
}}
[[Category:Robots]]
d1bea38266c75dbb8b9394640a35a0d2d9aeaa22
Draco
0
129
1648
444
2024-07-06T21:39:08Z
Ben
2
wikitext
text/x-wiki
Draco is a high-performance bipedal platform developed by [[Apptronik]]. It’s their first biped robot, designed with a focus on speed and power. The system has 10 Degrees of Freedom (DOFs), allowing for a wide range of movements and tasks. One of the key features of Draco is its liquid cooling system, which helps maintain optimal performance during operation.
<youtube>https://www.youtube.com/watch?v=g9xt2zdSOo8</youtube>
{{infobox robot
| name = Draco
| organization = [[Apptronik]]
| height =
| weight =
| two_hand_payload =
| video_link = https://www.youtube.com/watch?v=g9xt2zdSOo8
| cost =
}}
[[Category:Robots]]
7ac132a836fe11b924e75182c532779c5903f761
Apollo
0
131
1649
448
2024-07-06T21:39:18Z
Ben
2
wikitext
text/x-wiki
The humanoid robot Apollo is a creation of [[Apptronik]], a company known for its advanced humanoid robots. Apollo is a practical bipedal platform that’s designed to perform useful tasks. It’s equipped with two NVIDIA Jetson units and has been trained in the Isaac platform’s simulation environment.
<youtube>https://www.youtube.com/watch?v=3CdwPGC9nyk&t=6s</youtube>
{{infobox robot
| name = Apollo
| organization = [[Apptronik]]
| height = 172.7 cm
| weight = 73 kg
| two_hand_payload = 25
| video_link = https://www.youtube.com/watch?v=3CdwPGC9nyk&t=6s
| cost =
}}
[[Category:Robots]]
86fb2f0143b306925bdc4e22cb83e606f5f47092
Digit
0
128
1651
518
2024-07-06T21:39:43Z
Ben
2
wikitext
text/x-wiki
[[File:Agility Robotics Digit.jpg|thumb]]
Digit is a humanoid robot developed by [[Agility]], designed to move through human environments and perform tasks such as navigation, obstacle avoidance, and manipulation. It is equipped with a sensor-laden torso and a pair of arms, and is marketed as the most advanced Mobile Manipulation Robot (MMR) on the market, capable of performing repetitive tasks in production environments without requiring significant infrastructure changes.
<youtube>https://www.youtube.com/watch?v=NgYo-Wd0E_U</youtube>
{{infobox robot
| name = Digit
| organization = [[Agility]]
| height = 175.3 cm
| weight = 65 kg
| two_hand_payload = 15.88
| runtime =
| walk_speed =
| video_link = https://www.youtube.com/watch?v=NgYo-Wd0E_U
| cost =
}}
Digit is notably designed to be bipedal, but not strictly humanoid or anthropomorphic, with ostrich-like reverse-jointed legs. This follows from Agility's design goal of maximizing the efficiency and robustness of legged locomotion.
== References ==
* Ackerman, Evan (2024). "Humanoid Robots are Getting to Work." ''IEEE Spectrum''.
[[Category:Robots]]
e979751cc450d410d341c192c89fa7dec0ae2b3c
Astribot S1
0
151
1652
619
2024-07-06T21:40:21Z
Ben
2
wikitext
text/x-wiki
The Astribot S1 boasts an array of features that set it apart from its predecessors. It is designed to operate without human guidance while carrying out its tasks, underscoring its autonomy<ref>https://www.geeky-gadgets.com/astribot-s1-ai-humanoid-robot/</ref>. The robot can assist with a variety of housework, including preparing drinks and other household chores such as ironing and folding<ref>https://www.msn.com/en-us/news/technology/astribot-ai-powered-humanoid-torso-can-prepare-drinks-help-with-housework/ar-AA1nP0ah</ref>.
{{infobox robot
| name = Astribot
| organization = AstriBot Corporation
| video_link =
| cost =
| height =
| weight =
| speed = 10 m/s
| lift_force = 10 kg
| battery_life =
| battery_capacity =
| purchase_link =
| number_made =
| dof = 7 (per arm)
| status =
}}
<youtube>https://www.youtube.com/watch?v=AePEcHIIk9s</youtube>
Key specifications of the Astribot S1 include a top speed of 10 meters per second (faster than an average adult man can run), a payload capacity of 10 kg per arm, and 7 degrees of freedom per arm. The arms exhibit a range of motion rivaling that of a human limb<ref>https://digialps.com/meet-the-incredibly-fast-astribot-s1-the-humanoid-robot-that-learns-at-1x-speed-without-any-help/</ref>.
== Capabilities ==
Astribot S1 is described as an 'all-purpose' home robot. While the full extent of its capabilities is still being unveiled, it is said to have the potential for tasks like folding clothes, cooking meals, and cleaning rooms<ref>https://businessnewsforkids.substack.com/p/85-a-super-cool-home-robot</ref>. In addition to these home tasks, the Astribot S1 is adept at performing intricate tasks such as opening bottles and pouring drinks, illustrating the robot's well-designed dexterity<ref>https://www.msn.com/en-us/news/technology/astribot-ai-powered-humanoid-torso-can-prepare-drinks-help-with-housework/ar-AA1nP0ah</ref>.
== References ==
<references />
[[Category:Robots]]
f932c3d3a04d0e06caf9cad659279645041aa6ed
Tiangong
0
150
1653
609
2024-07-06T21:41:05Z
Ben
2
wikitext
text/x-wiki
This robot, which shares its name with the Chinese space station, is claimed to be the "first" running electric humanoid, although the Unitree H1 already runs faster.
<youtube>https://www.youtube.com/watch?v=DeQpz5ycInE</youtube>
== Links ==
* [https://www.maginative.com/article/meet-tiangong-chinas-full-size-electric-running-humanoid-robot/ Announcement article]
3f94a22ca7235e87e001a2eb53b172a1d172c703
Nadia
0
109
1654
600
2024-07-06T21:41:33Z
Ben
2
wikitext
text/x-wiki
Nadia is a humanoid robot developed by the Institute for Human & Machine Cognition (IHMC) in collaboration with Boardwalk Robotics<ref>[http://robots.ihmc.us/nadia Nadia Humanoid — IHMC Robotics Lab]</ref>. The robot is designed with a high power-to-weight ratio and a large range of motion, characteristics that give it exceptional mobility<ref>[https://www.boardwalkrobotics.com/Nadia.html Nadia Humanoid Robot - Boardwalk Robotics]</ref>. Though specific height, weight, and payload figures have not been explicitly stated, Nadia reportedly has one of the highest ranges of motion across its 29 joints of any humanoid robot globally<ref>[https://www.boardwalkrobotics.com/Nadia.html Nadia Humanoid Robot - Boardwalk Robotics]</ref>.
{{infobox robot
| name = Nadia
| organization = [[IHMC, Boardwalk Robotics]]
| height =
| weight =
| single_hand_payload =
| two_hand_payload =
| video_link = https://www.youtube.com/watch?v=uTmUfOc7r_s
| cost =
}}
<youtube>https://www.youtube.com/watch?v=uTmUfOc7r_s</youtube>
== Design and Capabilities ==
The Nadia humanoid robot's design encompasses a high power-to-weight ratio, contributing to its significant mobility potential. It stands out due to its extensive range of motion, facilitated by its architecture of 29 joints<ref>[https://www.boardwalkrobotics.com/Nadia.html Nadia Humanoid Robot - Boardwalk Robotics]</ref>. These design features enable Nadia to adapt to and function within urban environments, aligning with the project's goal of facilitating semi-autonomous behaviors.
Built with the same intelligence that powers the IHMC's DRC-Atlas robot, Nadia boasts real-time perception, compliant locomotion, autonomous footstep placement, and dexterous VR-teleoperated manipulation<ref>[https://www.boardwalkrobotics.com/Nadia.html Nadia Humanoid Robot - Boardwalk Robotics]</ref>.
== Research and Development ==
The development of Nadia is a collaborative project by the IHMC Robotics Lab and Boardwalk Robotics. The research team aims to produce a next-generation humanoid, capable of executing more perilous tasks while retaining high mobility<ref>[https://www.bbc.com/news/world-us-canada-67722014 A VR-controlled robot that throws boxing punches - BBC]</ref>. This development project positions Nadia as one of the most mobile ground robots designed in-house at IHMC in nearly a decade<ref>[https://www.ihmc.us/news20221005/ Video shows progress of IHMC humanoid robot Nadia]</ref>.
== References ==
<references />
[[Category:Robots]]
aecf195316df94461500efd764a83537b447c923
HD Atlas
0
134
1655
462
2024-07-06T21:42:04Z
Ben
2
wikitext
text/x-wiki
HD Atlas is a highly dynamic humanoid robot developed by [[Boston Dynamics]]. It’s designed for real-world applications and is capable of demonstrating advanced athletics and agility.
{{infobox robot
| name = HD Atlas
| organization = [[Boston Dynamics]]
| height =
| weight =
| two_hand_payload =
| video_link = https://www.youtube.com/watch?v=-9EM5_VFlt8
| cost =
}}
<youtube>https://www.youtube.com/watch?v=-9EM5_VFlt8</youtube>
[[Category:Robots]]
9a6352c71df769baf1b47d9c1311cd81339bde64
Atlas
0
81
1656
330
2024-07-06T21:42:16Z
Ben
2
wikitext
text/x-wiki
Atlas is a humanoid robot from [[Boston Dynamics]].
{{infobox robot
| name = Atlas
| organization = [[Boston Dynamics]]
| height = 150 cm
| weight = 89 kg
| video_link = https://www.youtube.com/watch?v=29ECwExc-_M
| single_hand_payload =
| two_hand_payload =
| cost =
}}
<youtube>https://www.youtube.com/watch?v=29ECwExc-_M</youtube>
[[Category:Robots]]
a815cf9a82a31291e4cec67e1de3d4a8b28d239d
XR4
0
77
1657
460
2024-07-06T21:42:37Z
Ben
2
wikitext
text/x-wiki
The XR4, also known as "Xiaozi", one of the Seven Fairies, is a bipedal humanoid robot developed by [[DATAA Robotics]]. The XR4 is made from lightweight, high-strength carbon fiber composite material and has over 60 intelligent flexible joints.
{{infobox robot
| name = XR4
| organization = [[DATAA Robotics]]
| height = 168 cm
| weight = 65 kg
| video_link = https://www.youtube.com/watch?v=DUyUZcH5uUU
| single_hand_payload =
| two_hand_payload =
| cost =
}}
<youtube>https://www.youtube.com/watch?v=DUyUZcH5uUU</youtube>
[[Category:Robots]]
ebdbae7d2c1f52b93282d41e3f687fc77a8d55e6
Wukong-IV
0
75
1658
574
2024-07-06T21:42:52Z
Ben
2
wikitext
text/x-wiki
The Wukong-IV is an adult-size humanoid robot designed and built by the research team at [[Deep Robotics]]. It stands 1.4 meters tall and weighs 45 kg<ref>https://pdfs.semanticscholar.org/f4d3/80f8e0fe39906f21f5270ffd2bf7bae74039.pdf</ref>. This bionic humanoid robot is actuated by 21 electric motor joints. It has 6 degrees of freedom (DoF) on each leg and 4 DoFs on each arm<ref>https://www.mdpi.com/2072-666X/13/10/1688</ref>.
{{infobox robot
| name = Wukong-IV
| organization = [[Deep Robotics]]
| height = 140 cm
| weight = 45 kg
| single_hand_payload =
| two_hand_payload =
| cost =
| video_link = https://www.youtube.com/watch?v=fbk4fYc6U14
| dof = 21
| number_made =
| lift_force =
| battery_life =
| battery_capacity =
| purchase_link =
| status =
| speed =
}}
<youtube>https://www.youtube.com/watch?v=fbk4fYc6U14</youtube>
== References ==
<references />
[[Category:Robots]]
0f391d5c3658b6366b7b2e7a37198bd69535db99
MagicBot
0
103
1659
409
2024-07-06T21:43:08Z
Ben
2
wikitext
text/x-wiki
MagicBot is a humanoid robot from [[MagicLab, DREAME]].
{{infobox robot
| name = MagicBot
| organization = [[MagicLab, DREAME]]
| height = 178 cm
| weight = 56 kg
| single_hand_payload =
| two_hand_payload =
| video_link = https://www.youtube.com/watch?v=NTPmiDrHv4E&t=2s
| cost =
}}
<youtube>https://www.youtube.com/watch?v=NTPmiDrHv4E&t=2s</youtube>
[[Category:Robots]]
898e7c7e1a2374fc1132c645e34ad451c7c42f44
Kaleido
0
94
1661
397
2024-07-06T21:44:28Z
Ben
2
wikitext
text/x-wiki
Kaleido is a humanoid robot from [[Kawasaki Robotics]].
{{infobox robot
| name = Kaleido
| organization = [[Kawasaki Robotics]]
| height = 179 cm
| weight = 83 kg
| two_hand_payload = 60
| video_link = https://www.youtube.com/watch?v=_h66xSbIEdU
}}
<youtube>https://www.youtube.com/watch?v=_h66xSbIEdU</youtube>
[[Category:Robots]]
913fba144fa1aace4e3520635b0f0c8c0a96bc58
Friends
0
96
1662
399
2024-07-06T21:44:41Z
Ben
2
wikitext
text/x-wiki
Friends is a humanoid robot from [[Kawasaki Robotics]].
{{infobox robot
| name = Friends
| organization = [[Kawasaki Robotics]]
| height = 168 cm
| weight = 54 kg
| two_hand_payload = 10
| video_link = https://www.youtube.com/watch?v=dz4YLbgbVvc
}}
<youtube>https://www.youtube.com/watch?v=dz4YLbgbVvc</youtube>
[[Category:Robots]]
ebb5b61a346601caf191af85d9cc4c69e60816bc
K1
0
92
1663
395
2024-07-06T21:45:06Z
Ben
2
wikitext
text/x-wiki
K1 is a humanoid robot from [[Kepler]].
{{infobox robot
| name = K1
| organization = [[Kepler]]
| height = 178 cm
| weight = 85 kg
| video_link = https://www.youtube.com/watch?v=A5vshTgDbKE
}}
<youtube>https://www.youtube.com/watch?v=A5vshTgDbKE</youtube>
[[Category:Robots]]
141efeba859c3efb1623feb514a8839555bc67ab
Stompy
0
2
1664
1459
2024-07-06T21:45:25Z
Ben
2
wikitext
text/x-wiki
[[File:Stompy.jpg|right|300px|thumb|Stompy standing up]]
{{infobox robot
| name = Stompy
| organization = [[K-Scale Labs]]
| cost = USD 10,000
}}
Stompy is an open-source humanoid robot developed by [[K-Scale Labs]]. Here are some relevant links:
* [[Stompy To-Do List]]
* [[Stompy Build Guide]]
* [[Gripper History]]
= Hardware =
This page is dedicated to detailing the hardware selections for humanoid robots, including various components such as actuators, cameras, compute units, PCBs and modules, batteries, displays, microphones, speakers, as well as wiring and connectors.
== Actuators ==
Actuators are the components that allow the robot to move and interact with its environment. They convert energy into mechanical motion. Common types used in humanoid robots include:
* Servo motors
* Stepper motors
* Linear actuators
== Cameras ==
Cameras are essential for visual processing, allowing the robot to perceive its surroundings. Important considerations include:
* Resolution and frame rate
* Field of view
* Depth sensing capabilities (3D cameras)
== Compute ==
The compute section handles the processing requirements of the robot. This includes:
* Microprocessors and microcontrollers
* Single-board computers like Raspberry Pi or Nvidia Jetson
* Dedicated AI accelerators for machine learning tasks
== PCB and Modules ==
Printed Circuit Boards (PCBs) and the modules on them are the backbone of the robot's electronic system.
* Main control board
* Power management modules
* Sensor interfaces
* Communication modules (Wi-Fi, Bluetooth)
== Batteries ==
Batteries provide the necessary power to all robotic systems and are crucial for mobile autonomy. Selection factors include:
* Battery type (Li-Ion, NiMH, Lead-Acid)
* Capacity (measured in mAh or Ah)
* Voltage and energy density
* Safety features and durability
== Displays ==
Displays are used in robots for displaying information such as system status, data, and interactive elements. Key features include:
* Size variations ranging from small to large panels
* Touchscreen capabilities
* High resolution displays
== Microphones ==
Microphones enable the robot to receive and process audio inputs, crucial for voice commands and auditory data. Factors to consider are:
* Sensitivity and noise cancellation
* Directionality (omnidirectional vs. unidirectional)
* Integration with voice recognition software
== Speakers ==
Speakers allow the robot to communicate audibly with its environment, essential for interaction and alerts. Considerations include:
* Power output and sound quality
* Size and mounting options
* Compatibility with audio processing hardware
== Wiring and Connectors ==
Proper wiring and connectors ensure reliable communication and power supply throughout the robot's components.
* Types of wires (gauge, shielding)
* Connectors (pin types, waterproofing)
* Cable management solutions
=== Conventions ===
The images below show our pin convention for the CAN bus when using various connectors.
<gallery>
Kscale db9 can bus convention.jpg
Kscale phoenix can bus convention.jpg
</gallery>
= Simulation =
For the latest simulation artifacts, see [https://kscale.dev/ the website].
= Artwork =
Here's some art of Stompy!
<gallery>
Stompy 1.png
Stompy 2.png
Stompy 3.png
Stompy 4.png
</gallery>
[[Category:Robots]]
[[Category:Open Source]]
[[Category:K-Scale]]
70fe138ddcd37045d7a3e6bd2586109620a79b71
Mona
0
107
1665
413
2024-07-06T21:45:47Z
Ben
2
wikitext
text/x-wiki
Mona is a humanoid robot from [[Kind Humanoid]].
{{infobox robot
| name = Mona
| organization = [[Kind Humanoid]]
| height =
| weight =
| single_hand_payload =
| two_hand_payload =
| video_link = https://www.youtube.com/watch?v=LsKOpooGK4Q
}}
<youtube>https://www.youtube.com/watch?v=LsKOpooGK4Q</youtube>
[[Category:Robots]]
49716b38951bd5e434cdcdc762250bec3f0b6b20
HECTOR V2
0
215
1666
932
2024-07-06T21:46:29Z
Ben
2
wikitext
text/x-wiki
HECTOR V2 is an open-source humanoid robot from USC.<ref>https://github.com/DRCL-USC/Hector_Simulation</ref><ref>https://laser-robotics.com/hector-v2/</ref>
<youtube>https://www.youtube.com/watch?v=W4f0641Kcpg</youtube>
=== References ===
<references/>
1c3b0da779df3d7f393efc04425313cc8148326c
Reachy
0
318
1667
1458
2024-07-06T21:47:11Z
Ben
2
wikitext
text/x-wiki
Reachy is a humanoid robot developed by [[Pollen Robotics]].
{{infobox robot
| name = Reachy
| organization = [[Pollen Robotics]]
| video_link = https://www.youtube.com/watch?v=oZxHkp4-DnM
}}
<youtube>https://www.youtube.com/watch?v=oZxHkp4-DnM</youtube>
[[Category:Robots]]
890f330121ebc30e69e232404b147248c3a1c7cc
HUBO
0
83
1668
333
2024-07-06T21:47:21Z
Ben
2
wikitext
text/x-wiki
HUBO is a humanoid robot from [[Rainbow Robotics]].
{{infobox robot
| name = HUBO
| organization = [[Rainbow Robotics]]
| height = 170 cm
| weight = 80 kg
| video_link = https://www.youtube.com/watch?v=r2pKEVTddy4
| single_hand_payload =
| two_hand_payload =
| cost = USD 320,000
}}
<youtube>https://www.youtube.com/watch?v=r2pKEVTddy4</youtube>
[[Category:Robots]]
b739f736e8aed14e4a05d8b544c9860ab5fd6a3b
1669
1668
2024-07-06T21:47:59Z
Ben
2
wikitext
text/x-wiki
HUBO is a humanoid robot from [[Rainbow Robotics]].
{{infobox robot
| name = HUBO
| organization = [[Rainbow Robotics]]
| height = 170 cm
| weight = 80 kg
| video_link = https://www.youtube.com/watch?v=r2pKEVTddy4
| single_hand_payload =
| two_hand_payload =
| cost = USD 320,000
}}
[[Category:Robots]]
2ef8e6ee8ee0469684e4f490089a879a8fbd6535
XBot
0
132
1670
458
2024-07-06T21:48:12Z
Ben
2
wikitext
text/x-wiki
XBot is a humanoid robot developed by [[Robot Era]], a startup incubated by Tsinghua University. The company has open-sourced its reinforcement learning framework, Humanoid-Gym, which was used to train the XBot and has proven successful in sim-to-real policy transfer.
{{infobox robot
| name = Xbot
| organization = [[Robot Era]]
| height = 122 cm
| weight = 38 kg
| two_hand_payload = 25
| video_link = https://www.youtube.com/watch?v=4tiVkZBw188
}}
<youtube>https://www.youtube.com/watch?v=4tiVkZBw188</youtube>
[[Category:Robots]]
fd288868e509d5fd61a469daddcac49c3d442bd1
Phoenix
0
53
1671
535
2024-07-06T21:48:23Z
Ben
2
wikitext
text/x-wiki
Phoenix is a sophisticated humanoid robot developed by [[Sanctuary AI]], a prominent company known for its advancements in robotics and artificial intelligence.
{{infobox robot
| name = Phoenix
| organization = Sanctuary AI
| video_link = https://youtube.com/watch?v=FH3zbUSMAAU
| height = 5 ft 7 in (170 cm)
| weight = 70 kg (155 lbs)
| two_hand_payload = 25 kg
}}
<youtube>https://youtube.com/watch?v=FH3zbUSMAAU</youtube>
=== Development and Capabilities ===
Phoenix, introduced on May 16, 2024, represents the seventh generation of Sanctuary AI's humanoid robots, aimed at performing general-purpose tasks in industries including service, healthcare, and logistics. The robot is designed to mimic human dexterity and mobility, allowing it to operate in environments built for humans.
=== Major Features ===
* '''Height and Weight''': Phoenix stands at a height of 170 cm and weighs approximately 70 kilograms, which is within the range of an average adult human. This anthropomorphic design facilitates easier integration into human-centric environments.
* '''Two-Hand Payload''': The robot has a two-hand payload capacity of 25 kilograms, making it capable of handling substantial weight, which is essential for tasks involving lifting and carrying objects.
=== Public Release ===
Sanctuary AI publicly unveiled Phoenix on May 16, 2024, through a comprehensive announcement that highlighted the robot's potential applications and its contribution to advancing human-robot collaboration. The detailed introduction and capabilities were featured in a press release and a demonstration video, which showcased Phoenix performing a variety of tasks.<ref>Sanctuary AI News Release, ''Sanctuary AI Unveils Phoenix, a Humanoid General-Purpose Robot Designed for Work'', [https://sanctuary.ai/resources/news/sanctuary-ai-unveils-phoenix-a-humanoid-general-purpose-robot-designed-for-work/]</ref>
[[File:Main-image-phoenix-annoucement.jpg|none|500px|Phoenix Gen 7|thumb]]
== References ==
<references />
[[Category:Robots]]
12143c135a9c0358c115ff6bcce2bf9008883b14
Pepper
0
243
1672
1030
2024-07-06T21:48:31Z
Ben
2
wikitext
text/x-wiki
Pepper is a humanoid robot developed by [[Softbank Robotics]], a division of SoftBank Group. Pepper is designed to interact with humans and is used in various customer service and retail environments.
{{infobox robot
| name = Pepper
| organization = [[Softbank Robotics]]
| height = 121 cm (4 ft)
| weight = 28 kg (62 lbs)
| video_link = https://www.youtube.com/watch?v=kr05reBxVRs
| cost = Approximately $1,800
}}
<youtube>https://www.youtube.com/watch?v=kr05reBxVRs</youtube>
Pepper was introduced by SoftBank Robotics in June 2014. It is designed to understand and respond to human emotions, making it suitable for roles in customer service, retail, and healthcare.
== References ==
[https://www.softbankrobotics.com/ SoftBank Robotics official website]
[https://www.youtube.com/watch?v=2GhUd0OJdJw Presentation of Pepper by SoftBank Robotics]
[[Category:Robots]]
[[Category:Humanoid Robots]]
[[Category:SoftBank Robotics]]
94ee68610b5af1acf49c917c0c3cb4799d506d1e
NAO
0
244
1673
1031
2024-07-06T21:48:40Z
Ben
2
wikitext
text/x-wiki
NAO is a humanoid robot developed by [[Softbank Robotics]], a division of SoftBank Group. NAO is widely used in education, research, and healthcare for its advanced interactive capabilities.
{{infobox robot
| name = NAO
| organization = [[Softbank Robotics]]
| height = 58 cm (1 ft 11 in)
| weight = 5.4 kg (11.9 lbs)
| video_link = https://www.youtube.com/watch?v=nNbj2G3GmAo
| cost = Approximately $8,000
}}
<youtube>https://www.youtube.com/watch?v=nNbj2G3GmAo</youtube>
NAO was first introduced in 2006 by Aldebaran Robotics, which was later acquired by SoftBank Robotics. NAO has undergone several upgrades, becoming one of the most popular robots used for educational and research purposes.
== References ==
[https://www.softbankrobotics.com/ SoftBank Robotics official website]
[https://www.youtube.com/watch?v=nNbj2G3GmAo Presentation of NAO by SoftBank Robotics]
[[Category:Robots]]
[[Category:Humanoid Robots]]
[[Category:SoftBank Robotics]]
6c56dacb5c487715ed4db2fe381041e2042b5a04
Rocky
0
114
1674
424
2024-07-06T21:49:06Z
Ben
2
wikitext
text/x-wiki
Rocky is [[SuperDroid Robots]]' most versatile robot platform; it can move on any terrain, climb stairs, and step over obstacles. SuperDroid Robots plans to open-source the humanoid once it is publicly walking and interacting with physical objects.
{{infobox robot
| name = Rocky
| organization = [[SuperDroid Robots]]
| video_link = https://twitter.com/stevenuecke/status/1707899032973033690
| cost = $75,000
| height = 64 in
| weight = 120 pounds
| speed =
| lift_force = 150 lbs
| battery_life = 8 hours
| battery_capacity = 3,600 Wh
| purchase_link = https://www.superdroidrobots.com/humanoid-biped-robot/
| number_made = 1
| dof = 22
| status = Finishing sim-2-real for motors
}}
<youtube>https://www.youtube.com/watch?v=MvAS4AsMvCI</youtube>
[[Category:Robots]]
0de569b82f8f5221974b39eb0feecffd7c24909b
ZEUS2Q
0
79
1675
461
2024-07-06T21:50:03Z
Ben
2
wikitext
text/x-wiki
ZEUS2Q is a humanoid robot developed by [[System Technology Works]]. It is a stand-alone system that harnesses edge AI computing, enabling it to perform localized tasks such as communication, facial recognition, and object recognition directly on the robot.
{{infobox robot
| name = ZEUS2Q
| organization = [[System Technology Works]]
| height = 127 cm
| weight = 13.61 kg
| video_link = https://www.youtube.com/watch?v=eR2HMykMITY
}}
<youtube>https://www.youtube.com/watch?v=eR2HMykMITY</youtube>
[[Category:Robots]]
ea79f0125e895cd77531bd3d15a8ba8c6f2d656f
Optimus
0
22
1676
622
2024-07-06T21:50:18Z
Ben
2
wikitext
text/x-wiki
[[File:Optimus Tesla (1).jpg|right|200px|thumb]]
Optimus is a humanoid robot developed by [[Tesla]], an American electric vehicle and clean energy company. Also known as Tesla Bot, Optimus is a key component of Tesla's expansion into automation and artificial intelligence technologies.
{{infobox robot
| name = Optimus
| organization = [[Tesla]]
| height = 5 ft 8 in (173 cm)
| weight = 58 kg
| video_link = https://www.youtube.com/watch?v=cpraXaw7dyc
| cost = Unknown, rumored $20k
}}
<youtube>https://www.youtube.com/watch?v=cpraXaw7dyc</youtube>
== Development ==
Tesla initiated the development of the Optimus robot in 2021, with the goal of creating a multipurpose utility robot capable of performing unsafe, repetitive, or boring tasks primarily intended for a factory setting. Tesla's CEO, Elon Musk, outlined that Optimus could potentially transition into performing tasks in domestic environments in the future.
== Design ==
The robot stands at a height of 5 feet 8 inches and weighs approximately 58 kilograms. Its design focuses on replacing human labor in hazardous environments, incorporating advanced sensors and algorithms to navigate complex workspaces safely.
== Features ==
The features of Optimus are built around its capability to handle tools, carry out tasks requiring fine motor skills, and interact safely with human environments. The robot is equipped with Tesla's proprietary Full Self-Driving (FSD) computer, allowing it to understand and navigate real-world scenarios effectively.
== Impact ==
Optimus has significant potential implications for labor markets, particularly in industries reliant on manual labor. Its development also sparks discussions on ethics and the future role of robotics in society.
== References ==
* [https://www.tesla.com Tesla official website]
* [https://www.youtube.com/watch?v=cpraXaw7dyc Presentation of Optimus by Tesla]
[[Category:Robots]]
fd381a2e80b1a65407c54b67427cf078de013405
Punyo
0
137
1677
601
2024-07-06T21:50:28Z
Ben
2
wikitext
text/x-wiki
Punyo is a soft robot developed by the [[Toyota Research Institute]] that employs whole-body manipulation, using its arms and chest, to assist with everyday tasks<ref>https://newatlas.com/robotics/toyota-punyo-humanoid-soft-robot/</ref><ref>https://punyo.tech/</ref>.
The robot platform is constructed with compliant materials and tactile mechanisms that increase its ability to safely handle objects, especially large and heavy ones<ref>https://medium.com/toyotaresearch/meet-punyo-tris-soft-robot-for-whole-body-manipulation-research-949c934ac3d8</ref><ref>https://spectrum.ieee.org/humanoid-robot-tri-punyo</ref>.
The robot's specific height, weight, and cost have not been published.
{{infobox robot
| name = Punyo
| organization = [[Toyota Research Institute]]
| height =
| weight =
| two_hand_payload =
| video_link = https://www.youtube.com/watch?v=FY-MD4gteeE
}}
<youtube>https://www.youtube.com/watch?v=FY-MD4gteeE</youtube>
== Overview ==
Punyo is a soft robot developed by the [[Toyota Research Institute]] (TRI) to revolutionize whole-body manipulation research. Unlike traditional robots that primarily use hands for manipulation, Punyo employs its arms and chest. The robot is designed to help with everyday tasks, such as lifting heavy objects or closing a drawer<ref>https://newatlas.com/robotics/toyota-punyo-humanoid-soft-robot/</ref><ref>https://punyo.tech/</ref>.
== Description ==
Punyo's platform is designed with compliant materials and employs tactile mechanisms that increase its ability to handle objects effectively and safely, especially large and heavy ones<ref>https://medium.com/toyotaresearch/meet-punyo-tris-soft-robot-for-whole-body-manipulation-research-949c934ac3d8</ref><ref>https://spectrum.ieee.org/humanoid-robot-tri-punyo</ref>.
== References ==
<references />
[[Category:Robots]]
f53dab9a8c7600806de6e7fe32bd5b5161d51743
T-HR3
0
135
1678
475
2024-07-06T21:50:41Z
Ben
2
wikitext
text/x-wiki
The T-HR3 is the third generation humanoid robot unveiled by [[Toyota Research Institute]]. It’s designed to explore new technologies for safely managing physical interactions between robots and their surroundings, and it features a remote maneuvering system that mirrors user movements to the robot. The T-HR3 can assist humans in various settings such as homes, medical facilities, construction sites, disaster-stricken areas, and even outer space.
{{infobox robot
| name = T-HR3
| organization = [[Toyota Research Institute]]
| height = 150 cm
| weight = 75 kg
| two_hand_payload =
| video_link = https://www.youtube.com/watch?v=5dPY7l7u_z0
}}
<youtube>https://www.youtube.com/watch?v=5dPY7l7u_z0</youtube>
[[Category:Robots]]
af386d373c3c159b3929e9834b04b6fe8e9ab226
Walker X
0
71
1679
454
2024-07-06T21:50:50Z
Ben
2
wikitext
text/x-wiki
Walker X is a highly advanced AI humanoid robot developed by [[UBTech]]. It incorporates six cutting-edge AI technologies, including upgraded vision-based navigation and hand-eye coordination, enabling it to move smoothly and quickly and to engage in precise, safe interactions. It is equipped with 41 high-performance servo joints, a 160° face-surrounding 4.6K HD dual flexible curved screen, and a 4-dimensional light language system.
{{infobox robot
| name = Walker X
| organization = [[UBTech]]
| height = 130 cm
| weight = 63 kg
| single_hand_payload = 1.5
| two_hand_payload = 3
| cost = USD 960,000
| video_link = https://www.youtube.com/watch?v=4ZL3LgdKNbw
}}
<youtube>https://www.youtube.com/watch?v=4ZL3LgdKNbw</youtube>
[[Category:Robots]]
b1151429ed4406064a83362e9a372e0fd5a9601a
Walker S
0
74
1680
456
2024-07-06T21:51:01Z
Ben
2
wikitext
text/x-wiki
Walker S by [[UBTech]] is a highly advanced humanoid robot designed to serve in household and office scenarios. It is equipped with 36 high-performance servo joints and a full range of sensory systems including force, vision, hearing, and spatial awareness, enabling smooth and fast walking and flexible, precise handling.
{{infobox robot
| name = Walker S
| organization = [[UBTech]]
| height =
| weight =
| video_link = https://www.youtube.com/watch?v=UCt7qPpTt-g
| single_hand_payload =
| two_hand_payload =
| cost =
}}
<youtube>https://www.youtube.com/watch?v=UCt7qPpTt-g</youtube>
[[Category:Robots]]
ff95dcceac1127abc162585795db721c26df8643
H1
0
3
1681
612
2024-07-06T21:51:17Z
Ben
2
wikitext
text/x-wiki
'''Unitree H1''' is a full-size universal humanoid robot developed by [[Unitree]], a company known for its innovative robotic designs. The H1 is noted for its power performance and advanced powertrain technology.
{{infobox robot
| name = H1
| organization = [[Unitree]]
| video_link = https://www.youtube.com/watch?v=83ShvgtyFAg
| cost = USD 150,000
| height = 180 cm
| weight = 47 kg
| speed = >3.3 m/s
| lift_force =
| battery_life =
| battery_capacity = 864 Wh
| purchase_link = https://shop.unitree.com/products/unitree-h1
| number_made =
| dof =
| status =
}}
<youtube>https://www.youtube.com/watch?v=83ShvgtyFAg</youtube>
== Specifications ==
The H1 robot stands approximately 180 cm tall and weighs around 47 kg, offering high mobility and physical capabilities. Some of the standout specifications of the H1 include:
* Maximum speed: Exceeds 3.3 meters per second, a benchmark in robot mobility.
* Weight: Approximately 47 kg.
* Maximum joint torque: 360 N.m.
* Battery capacity: 864 Wh, which is quickly replaceable, enhancing the robot's operational endurance.
== Features ==
The H1 incorporates advanced technologies to achieve its high functionality:
* Highly efficient powertrain for superior speed, power, and maneuverability.
* Equipped with high-torque joint motors developed by [[Unitree]] itself.
* 360° depth sensing capabilities combined with LIDAR and depth cameras for robust environmental perception.
== Uses and Applications ==
While detailed use cases of the Unitree H1 are not extensively documented, the robot's build and capabilities suggest it is suited to tasks requiring human-like dexterity and strength, such as industrial applications, complex terrain navigation, and interactive tasks.
[[Category:Robots]]
4b761c614db4f672f92bb904f8d61e017c3c91a0
G1
0
233
1682
1022
2024-07-06T21:51:39Z
Ben
2
wikitext
text/x-wiki
[[File:Unitree g1.png|thumb]]
The G1 is an upcoming humanoid robot from [[Unitree]].
{{infobox robot
| name = G1
| organization = [[Unitree]]
| video_link = https://mp.weixin.qq.com/s/RGNVRazZqDn3y_Ijemc5Kw
| cost = 16000 USD
| height = 127 cm
| weight = 35 kg
| speed =
| lift_force =
| battery_life =
| battery_capacity = 9000 mAh
| purchase_link =
| number_made =
| dof = 23
| status = Preorders
}}
{{infobox robot
| name = G1 Edu Standard
| organization = [[Unitree]]
| cost = 31900 USD
| notes = Improved torque, warranty
}}
{{infobox robot
| name = G1 Edu Plus
| organization = [[Unitree]]
| cost = 34900 USD
| notes = Docking station
}}
{{infobox robot
| name = G1 Edu Smart
| organization = [[Unitree]]
| cost = 43900 USD
| notes = 3 waist DoFs instead of 1, more arm DoFs
}}
{{infobox robot
| name = G1 Edu Ultimate
| organization = [[Unitree]]
| cost = 53900 USD
| notes = Comes with force-controlled 3-finger dexterous hands
}}
<youtube>https://www.youtube.com/watch?v=GzX1qOIO1bE</youtube>
11e3ef845d8fe70dc02103f78f910840a1c90b84
THEMIS
0
115
1683
606
2024-07-06T21:51:58Z
Ben
2
wikitext
text/x-wiki
THEMIS is a humanoid robot developed by [[Westwood Robotics]]. It should not be confused with the THeMIS unmanned ground vehicle (UGV) from Milrem Robotics.<ref>[https://milremrobotics.com/defence/ Milrem Robotics Defence]</ref>
{{infobox robot
| name = THEMIS
| organization = [[Westwood Robotics]]
| video_link = https://www.youtube.com/watch?v=yt4mHwAl9cc
| height = 142.2 cm
| weight = 39 kg
| cost =
| speed =
| lift_force =
| battery_life =
| battery_capacity =
| purchase_link =
| number_made =
| dof =
| status =
}}
<youtube>https://www.youtube.com/watch?v=yt4mHwAl9cc</youtube>
Further details about the robot's cost, speed, lift force, number made, degrees of freedom, battery life and capacity, and development status are not yet available.
== References ==
<references />
[[Category:Robots]]
5b1c744201b65b73242e0a658f92ea4ed67bd698
WorkFar Syntro
0
298
1684
1357
2024-07-06T21:52:10Z
Ben
2
wikitext
text/x-wiki
The '''Syntro''' robot from [[WorkFar]] is designed for advanced warehouse automation.
{{infobox robot
| name = Syntro
| organization = [[WorkFar]]
| video_link = https://www.youtube.com/watch?v=suF7mEtLJvY
| cost =
| height =
| weight =
| speed =
| lift_force =
| battery_life =
| battery_capacity =
| purchase_link =
| number_made =
| dof =
| status =
}}
<youtube>https://www.youtube.com/watch?v=suF7mEtLJvY</youtube>
[[Category:Robots]]
517569dcd2fe44f6d116951259ee63aa30d1b7ae
CyberOne
0
126
1685
623
2024-07-06T21:52:17Z
Ben
2
wikitext
text/x-wiki
CyberOne is a humanoid robot developed by the Chinese consumer electronics giant, Xiaomi. Unveiled in 2022 at a company event in Beijing by the founder, chairman, and CEO, Lei Jun, it is the newest member of Xiaomi's Cyber series, joining previously launched quadruped robots like CyberDog and CyberDog 2<ref>https://robotsguide.com/robots/cyberone</ref>.
{{infobox robot
| name = CyberOne
| organization = [[Xiaomi]]
| height = 177 cm
| weight = 52 kg
| single_arm_payload = 1.5
| runtime =
| walk_speed = 3.6 km/h
| video_link = https://www.youtube.com/watch?v=yBmatGQ0giY
| cost = 600,000 - 700,000 yuan (est.)
}}
<youtube>https://www.youtube.com/watch?v=yBmatGQ0giY</youtube>
== Specifications ==
This bipedal humanoid robot has a height of 177 cm and a weight of 52 kg, with an arm span of 168 cm<ref>https://www.gadgets360.com/smart-home/news/xiaomi-cyberone-unveiled-specifications-humanoid-bionic-robot-oled-display-features-3249173</ref>. One of its distinct features is its ability to perceive in 3D, recognize individuals, and respond to human emotions<ref>https://www.gadgets360.com/smart-home/news/xiaomi-cyberone-unveiled-specifications-humanoid-bionic-robot-oled-display-features-3249173</ref>. Furthermore, it has a top speed of 3.6 km/h<ref>https://www.theverge.com/2022/8/16/23307808/xiaomi-cyberone-humanoid-robot-tesla-optimus-bot-specs-comparison</ref>.
== Pricing ==
The cost of CyberOne, if ever produced and made available for purchase, is estimated to be around 600,000 to 700,000 yuan<ref>https://robbreport.com/gear/electronics/xiaomi-humanoid-robot-cyberone-1234738597/</ref>.
[[Category:Robots]]
== References ==
<references />
19c136625b7759c54563be287f22db3fb4a19a65
PX5
0
111
1686
625
2024-07-06T21:52:25Z
Ben
2
wikitext
text/x-wiki
The PX5 is a humanoid robot developed by Xpeng, unveiled for the first time during Xpeng Motors' Tech Day in 2023.<ref>https://kr-asia.com/xpeng-motors-unveils-px5-humanoid-robot-underlining-its-vision-for-the-future</ref> The robot stands approximately 1.5 meters in height and is able to navigate different terrain and handle objects with precision, demonstrating remarkable stability.<ref>https://technode.com/2023/10/25/xpeng-tech-day-2023-first-mpv-self-driving-timeline-flying-cars-and-humanoid-robots/</ref> Constructed with a silver-white color scheme, the PX5 is also resistant to shock.<ref>https://kr-asia.com/xpeng-motors-unveils-px5-humanoid-robot-underlining-its-vision-for-the-future</ref>
{{infobox robot
| name = PX5
| organization = [[Xpeng]]
| height = 1.5 meters
| weight =
| video_link = https://www.youtube.com/watch?v=BNSZ8Fwcd20
| cost =
}}
<youtube>https://www.youtube.com/watch?v=BNSZ8Fwcd20</youtube>
== Development ==
Xpeng Robotics, an ecosystem company of Xpeng specializing in smart robots, revealed the PX5. Founded in 2016, the company innovates in areas such as robot powertrains, locomotion control, robot autonomy, robot interaction, and artificial intelligence, contributing to a shared mission of exploring future mobility solutions.<ref>https://www.pxing.com/en/about</ref>
== Design and Capabilities ==
The PX5 bears a striking silver-white finish and is resistant to shock. Demonstrations have highlighted its ability to navigate different terrains and to handle objects such as a pen with exceptional stability.<ref>https://kr-asia.com/xpeng-motors-unveils-px5-humanoid-robot-underlining-its-vision-for-the-future</ref> <ref>https://technode.com/2023/10/25/xpeng-tech-day-2023-first-mpv-self-driving-timeline-flying-cars-and-humanoid-robots/</ref>
== References ==
<references />
[[Category:Robots]]
11fe68e066b6fe89def83c89e1530418982723ba
Valkyrie
0
117
1694
1647
2024-07-07T18:02:09Z
Ben
2
small cleanup
wikitext
text/x-wiki
NASA’s Valkyrie, also known as R5, is a robust, rugged, and entirely electric humanoid robot. It was designed and built by the Johnson Space Center (JSC) Engineering Directorate to compete in the 2013 DARPA Robotics Challenge (DRC) Trials.
<youtube>https://www.youtube.com/watch?v=LaYlQYHXJio</youtube>
{{infobox robot
| name = Valkyrie
| organization = [[NASA]]
| height = 190 cm
| weight = 125 kg
| video_link = https://www.youtube.com/watch?v=LaYlQYHXJio
}}
[[Category:Robots]]
01955b8fdce745d59aedf87289e62f126ca7c459
File:Robot taking notes.png
6
376
1695
2024-07-08T15:38:21Z
Ben
2
wikitext
text/x-wiki
Robot taking notes
907ccf64e98bfb5d3b7ead7d4b154f86e3319dd0
K-Scale Weekly Progress Updates
0
294
1696
1627
2024-07-08T15:38:43Z
Ben
2
wikitext
text/x-wiki
[[File:Robot taking notes.png|thumb|Robot taking notes (from Midjourney)]]
{| class="wikitable"
|-
! Link
|-
| [https://x.com/kscalelabs/status/1809263616958374286 2024.07.05]
|-
| [https://x.com/kscalelabs/status/1804184936574030284 2024.06.21]
|-
| [https://x.com/kscalelabs/status/1801749382167204086 2024.06.14]
|-
| [https://x.com/kscalelabs/status/1799197382208590132 2024.06.07]
|-
| [https://x.com/kscalelabs/status/1796617681455775944 2024.05.31]
|-
| [https://x.com/kscalelabs/status/1794109131214712914 2024.05.24]
|-
| [https://x.com/kscalelabs/status/1791507358780461496 2024.05.17]
|-
| [https://x.com/kscalelabs/status/1788968705378181145 2024.05.10]
|}
[[Category:K-Scale]]
ab8ec4a8d3c203cd44b334b55184bafbfb23b600
1720
1696
2024-07-12T16:52:25Z
Ben
2
wikitext
text/x-wiki
[[File:Robot taking notes.png|thumb|Robot taking notes (from Midjourney)]]
{| class="wikitable"
|-
! Link
|-
| [https://x.com/kscalelabs/status/1811805432505336073 2024.07.12]
|-
| [https://x.com/kscalelabs/status/1809263616958374286 2024.07.05]
|-
| [https://x.com/kscalelabs/status/1804184936574030284 2024.06.21]
|-
| [https://x.com/kscalelabs/status/1801749382167204086 2024.06.14]
|-
| [https://x.com/kscalelabs/status/1799197382208590132 2024.06.07]
|-
| [https://x.com/kscalelabs/status/1796617681455775944 2024.05.31]
|-
| [https://x.com/kscalelabs/status/1794109131214712914 2024.05.24]
|-
| [https://x.com/kscalelabs/status/1791507358780461496 2024.05.17]
|-
| [https://x.com/kscalelabs/status/1788968705378181145 2024.05.10]
|}
[[Category:K-Scale]]
5e8581190123da2179befb77f1d63c54860b696c
Main Page
0
1
1697
1644
2024-07-08T21:51:27Z
Vrtnis
21
/*Correct Figure AI wikilink*/
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.youtube.com/watch?v=LiNgr1tz49I&list=PLZnJoM76RM6ItAfZIxJYNKdaR_BobleLY Advanced Robot Dynamics]
| High-quality open-source course from CMU
|-
| [https://www.youtube.com/watch?v=6rUdAOCNXAU&list=PLZnJoM76RM6KugDT9sw5zhAmqKnGeoLRa Optimal Control]
| High-quality open-source course from CMU
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
| Textbook for graduate-level courses in robotics
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Courses]]
| Category for pages about useful courses related to robotics
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Firmware]]
| Category for pages relating to firmware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Papers]]
| Category for humanoid robotics papers
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== List of Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]], [[Alex]]
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[DREAME]]
| [[MagicBot]]
|-
| [[Engineered Arts]]
| [[Ameca]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure AI]]
| [[Figure 01]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[GALBOT]]
| [[GALBOT]]
|-
| [[Haier]]
| [[Kuavo (Kuafu)]]
|-
| [[Honda Robotics]]
| [[ASIMO]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Instituto Italiano]]
| [[iCub]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[LASER Robotics]]
| [[HECTOR V2]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[NASA]]
| [[Valkyrie]], [[Robonaut2]]
|-
| [[NEURA Robotics]]
| [[4NE-1]]
|-
| [[Noetix]]
| [[Dora]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[PaXini]]
| [[Tora]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[Pollen Robotics]]
| [[Reachy]]
|-
| [[Proxy]]
|
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[SoftBank Robotics]]
| [[Pepper]], [[NAO]]
|-
| [[Stanford Robotics Lab]]
| [[OceanOneK]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[SUPCON]]
| [[Navigator α]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[THK]]
|
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[UC Berkeley]]
| [[Berkeley Blue]]
|-
| [[Unitree]]
| [[H1]], [[G1]]
|-
| [[University of Tehran]]
| [[Surena IV]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[WorkFar]]
| [[WorkFar Syntro]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Xpeng]]
| [[PX5]]
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== List of Actuators ===
{| class="wikitable"
|-
! Actuator
! Notes
|-
| [[OBot]]
| Open-source actuator
|-
| [[SPIN Servo]]
| Open-source actuator
|-
| [[VESCular6]]
| A project based on [[VESC]]
|-
| [[ODrive]]
| A precision motor controller
|-
| [[Solo Motor Controller]]
| A motor controller alternative to the [[ODrive]].
|-
| [[J60]]
| Actuators built for the [[DEEP Robotics]] quadrupeds.
|-
| [[K-Scale Motor Controller]]
| An open-source motor controller
|}
=== Discord community ===
[https://discord.gg/rhCy6UdBRD Discord]
54ba8b2ec9adbe65cb51f7ebfd0c3b9c47c505f6
1705
1697
2024-07-11T23:28:56Z
Vrtnis
21
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.youtube.com/watch?v=LiNgr1tz49I&list=PLZnJoM76RM6ItAfZIxJYNKdaR_BobleLY Advanced Robot Dynamics]
| High-quality open-source course from CMU
|-
| [https://www.youtube.com/watch?v=6rUdAOCNXAU&list=PLZnJoM76RM6KugDT9sw5zhAmqKnGeoLRa Optimal Control]
| High-quality open-source course from CMU
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
| Textbook for graduate-level courses in robotics
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Courses]]
| Category for pages about useful courses related to robotics
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Firmware]]
| Category for pages relating to firmware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Papers]]
| Category for humanoid robotics papers
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== List of Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]], [[Alex]]
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[MagicLab,_DREAME]]
| [[MagicBot]]
|-
| [[Engineered Arts]]
| [[Ameca]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure AI]]
| [[Figure 01]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[GALBOT]]
| [[GALBOT]]
|-
| [[Haier]]
| [[Kuavo (Kuafu)]]
|-
| [[Honda Robotics]]
| [[ASIMO]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Instituto Italiano]]
| [[iCub]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[LASER Robotics]]
| [[HECTOR V2]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[NASA]]
| [[Valkyrie]], [[Robonaut2]]
|-
| [[NEURA Robotics]]
| [[4NE-1]]
|-
| [[Noetix]]
| [[Dora]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[PaXini]]
| [[Tora]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[Pollen Robotics]]
| [[Reachy]]
|-
| [[Proxy]]
|
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[SoftBank Robotics]]
| [[Pepper]], [[NAO]]
|-
| [[Stanford Robotics Lab]]
| [[OceanOneK]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[SUPCON]]
| [[Navigator α]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[THK]]
|
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[UC Berkeley]]
| [[Berkeley Blue]]
|-
| [[Unitree]]
| [[H1]], [[G1]]
|-
| [[University of Tehran]]
| [[Surena IV]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[WorkFar]]
| [[WorkFar Syntro]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Xpeng]]
| [[PX5]]
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== List of Actuators ===
{| class="wikitable"
|-
! Actuator
! Notes
|-
| [[OBot]]
| Open-source actuator
|-
| [[SPIN Servo]]
| Open-source actuator
|-
| [[VESCular6]]
| A project based on [[VESC]]
|-
| [[ODrive]]
| A precision motor controller
|-
| [[Solo Motor Controller]]
| A motor controller alternative to the [[ODrive]].
|-
| [[J60]]
| Actuators built for the [[DEEP Robotics]] quadrupeds.
|-
| [[K-Scale Motor Controller]]
| An open-source motor controller
|}
=== Discord community ===
[https://discord.gg/rhCy6UdBRD Discord]
ea0d4924a37d38f19e1e0b8aa9b7a5a7d2315cd0
Beijing Humanoid Robot Innovation Center
0
377
1698
2024-07-09T20:56:11Z
Vrtnis
21
Created page with "Beijing has launched a new innovation center dedicated to humanoid robots, aiming to boost technological advancements and industrial growth. Located in the Beijing Economic-Te..."
wikitext
text/x-wiki
Beijing has launched a new innovation center dedicated to humanoid robots, aiming to boost technological advancements and industrial growth. Located in the Beijing Economic-Technological Development Area, this center will tackle key industry challenges and support the development of globally influential robot companies by 2025.
<ref>https://english.beijing.gov.cn/latest/news/202311/t20231105_3295012.html</ref>
e09c62ff5f44286885f46c035984e82e5ad2fb4f
Anthro
0
378
1699
2024-07-09T20:58:38Z
Vrtnis
21
Created page with "The Anthro™ is an anthropomorphic (humanoid) host for embodied synthetic intelligence (or AGI). <ref>https://anthrobotics.ca/the-anthro/</ref>"
wikitext
text/x-wiki
The Anthro™ is an anthropomorphic (humanoid) host for embodied synthetic intelligence (or AGI).
<ref>https://anthrobotics.ca/the-anthro/</ref>
e793ccd0e1c3165e6e930d27ff8f13c88a58605e
QDH
0
379
1700
2024-07-09T21:23:51Z
Vrtnis
21
Created page with "Apptronik’s latest upper body humanoid robot is designed to operate with and around humans. It has state-of-the-art actuation packed into a small form factor and can be put..."
wikitext
text/x-wiki
Apptronik’s latest upper body humanoid robot is designed to operate with and around humans. It has state-of-the-art actuation packed into a small form factor and can be put on any mobility platform.
<ref>https://apptronik.com/our-work</ref>
2bab8570a014a5938c7c61502d1eb2b89cae27a3
1701
1700
2024-07-09T21:24:20Z
Vrtnis
21
wikitext
text/x-wiki
[[Apptronik]]’s latest upper body humanoid robot is designed to operate with and around humans. It has state-of-the-art actuation packed into a small form factor and can be put on any mobility platform.
<ref>https://apptronik.com/our-work</ref>
d354462b3738851cc00288b839cfbc508cba5608
BR002
0
380
1702
2024-07-09T21:30:56Z
Vrtnis
21
Created page with " BR002 is a humanoid robot developed by Booster Robotics, designed to showcase advanced mobility and versatility. It can transition from a prone position to standing by foldin..."
wikitext
text/x-wiki
BR002 is a humanoid robot developed by Booster Robotics, designed to showcase advanced mobility and versatility. It can transition from a prone position to standing by folding its legs backward, similar to the method used by Boston Dynamics' Atlas. BR002 features complex motion control algorithms for its humanoid leg configurations, allowing it to perform lateral and vertical splits, with joints capable of 360° rotation, offering a wide range of motion.
<ref> https://equalocean.com/news/2024050820854</ref>
3acf27253c8f724f7aaed0c52f7603e97596524b
Skild
0
11
1703
219
2024-07-10T22:25:37Z
Vrtnis
21
wikitext
text/x-wiki
Skild is a stealth foundation model startup founded by two faculty members from Carnegie Mellon University.
{{infobox company
| name = Skild
| country = United States
| website_link = https://www.skild.ai/
}}
=== Articles ===
* [https://www.theinformation.com/articles/venture-fomo-hits-robotics-as-young-startup-gets-1-5-billion-valuation Venture FOMO Hits Robotics as Young Startup Gets $1.5 Billion Valuation]
[[Category:Companies]]
e641ae138131c1ceb3f744bbf0d448761fe0e65a
Engineered Arts
0
381
1704
2024-07-11T23:27:01Z
Vrtnis
21
Created page with "'''Engineered Arts''' is a UK-based company specializing in the development of humanoid robots and advanced robotic systems. Known for their sophisticated and lifelike creatio..."
wikitext
text/x-wiki
'''Engineered Arts''' is a UK-based company specializing in the development of humanoid robots and advanced robotic systems. Known for their sophisticated and lifelike creations, they integrate cutting-edge technology to produce robots used in entertainment, research, and public engagement.<ref>https://engineeredarts.co.uk/</ref>
617d02d020f684da74dd2faaa652f7266edeabbd
Ameca
0
382
1706
2024-07-11T23:30:55Z
Vrtnis
21
Created page with "'''Ameca''' is a humanoid robot developed by Engineered Arts, known for its advanced human-like expressions and interactions. Designed to serve as a platform for artificial in..."
wikitext
text/x-wiki
'''Ameca''' is a humanoid robot developed by Engineered Arts, known for its advanced human-like expressions and interactions. Designed to serve as a platform for artificial intelligence and machine learning research, Ameca is equipped with state-of-the-art sensors and actuators to mimic human movements and emotions.<ref>https://engineeredarts.co.uk/robot/ameca/</ref>
fc4b0d6c57bfac63c19d57684c6c9fa570cc3840
Kuavo (Kuafu)
0
383
1707
2024-07-11T23:32:28Z
Vrtnis
21
Created page with "'''Kuavo (Kuafu)''' is a humanoid robot jointly developed by Haier Robot and Leju Robot, and was first exhibited at AWE 2024. As China's first open-source Hongmeng humanoid ro..."
wikitext
text/x-wiki
'''Kuavo (Kuafu)''' is a humanoid robot jointly developed by Haier Robot and Leju Robot, first exhibited at AWE 2024. As China's first open-source Hongmeng humanoid robot designed for household scenarios, Kuavo is capable of jumping, multi-terrain walking, and performing tasks such as washing, watering flowers, and drying clothes. The development of Kuavo follows a strategic cooperation between Haier Home Robot and Leju Robot to integrate AI and robotics into smart home environments.<ref>http://www.datayuan.cn/article/20862.htm</ref>
2ad07dc797fe79c2e83651fcc5a34099c5569502
Shadow-1
0
384
1708
2024-07-11T23:35:54Z
Vrtnis
21
Created page with "'''Shadow-1''' is Hyperspawn's flagship open-source robot, designed to perform complex tasks with human-like precision and efficiency, particularly suited for exploration and..."
wikitext
text/x-wiki
'''Shadow-1''' is Hyperspawn's flagship open-source robot, designed to perform complex tasks with human-like precision and efficiency, particularly suited for exploration and task execution in extreme conditions. This advanced bipedal robot integrates cutting-edge technology to navigate and operate in challenging environments, showcasing Hyperspawn's commitment to innovation in robotic performance and reliability.<ref>https://www.hyperspawn.co/bipeds</ref>
ded51a30de533cd3d41ca40594331c3f2666699f
NEURA Robotics
0
385
1709
2024-07-11T23:37:30Z
Vrtnis
21
Created page with "'''NEURA Robotics''' is a German high-tech company founded in 2019 in Metzingen near Stuttgart, dedicated to enhancing collaborative robots with cognitive capabilities for saf..."
wikitext
text/x-wiki
'''NEURA Robotics''' is a German high-tech company founded in 2019 in Metzingen near Stuttgart, dedicated to enhancing collaborative robots with cognitive capabilities for safer and more efficient human-robot interaction. With a team of over 170 members from 30+ countries, NEURA Robotics has developed innovations like MAiRA, the world’s first cognitive robot, and MiPA, a versatile robotic assistant.<ref>https://neura-robotics.com/company</ref>
e65efc64d9bd1df338e047b89ce24d0c276b5277
Kayra.org
0
386
1710
2024-07-11T23:38:47Z
Vrtnis
21
Created page with "'''Kayra''' is an open-source, easy-to-modify, 3D-printable humanoid robot. Designed for accessibility and customization, Kayra aims to make advanced robotics more approachabl..."
wikitext
text/x-wiki
'''Kayra''' is an open-source, easy-to-modify, 3D-printable humanoid robot. Designed for accessibility and customization, Kayra aims to make advanced robotics more approachable for enthusiasts and researchers.<ref>https://kayra.org</ref>
6551581e8abb54f61ffbe992235213124ae7f7b1
Noetix
0
387
1711
2024-07-11T23:40:10Z
Vrtnis
21
Created page with "'''Noetix Robotics''' focuses on the development and manufacturing of humanoid robots, integrating fields such as general artificial intelligence, robot bionics, and embodied..."
wikitext
text/x-wiki
'''Noetix Robotics''' focuses on the development and manufacturing of humanoid robots, integrating fields such as general artificial intelligence, robot bionics, and embodied operating systems. Their work aims to advance the capabilities and applications of humanoid robotics.<ref>https://noetixrobotics.com/</ref>
b2435a008e3124105c5b36349b9d16bbb38a44d8
Dora
0
388
1712
2024-07-11T23:42:00Z
Vrtnis
21
Created page with "'''Dora''' is a humanoid robot developed by Noetix Robotics, a start-up technology company established in Beijing in September 2023. Dora, showcased at the World Artificial In..."
wikitext
text/x-wiki
'''Dora''' is a humanoid robot developed by Noetix Robotics, a start-up technology company established in Beijing in September 2023. Dora, showcased at the World Artificial Intelligence Conference, is China's first lightweight, commercialized general-purpose humanoid robot. It is designed for various applications, including scientific research, education, exhibition display, and service companionship.<ref>http://www.azchinesenews1.com/static/content/XW/2024-07-05/1258803435970535978.html</ref>
628e5e3d0534800c04a54625cbbed5b418fd8046
POINTBLANK
0
389
1713
2024-07-11T23:43:58Z
Vrtnis
21
Created page with "'''POINTBLANK''' is a company specializing in rapid prototyping, hardware development, open-source projects, and robotics. They are dedicated to advancing technology through i..."
wikitext
text/x-wiki
'''POINTBLANK''' is a company specializing in rapid prototyping, hardware development, open-source projects, and robotics. They are dedicated to advancing technology through innovative solutions and accessible designs.<ref>https://www.pointblankllc.com/</ref>
96c5cabfc124baed7c12f0d13f5ceaf749ee5a79
DROPBEAR
0
390
1714
2024-07-11T23:45:05Z
Vrtnis
21
Created page with "'''Dropbear''' is an advanced humanoid robot developed by Hyperspawn and Pointblank. Designed to operate in varied environments, Dropbear showcases agility, precision, and int..."
wikitext
text/x-wiki
'''Dropbear''' is an advanced humanoid robot developed by Hyperspawn and Pointblank. Designed to operate in varied environments, Dropbear showcases agility, precision, and intelligence.<ref>https://github.com/Hyperspawn/Dropbear</ref>
58c50090335a51faafdcb299a6546bd1ff5f9732
Robotera
0
391
1715
2024-07-11T23:46:40Z
Vrtnis
21
Created page with "'''Robotera''' was established in August 2023 and focuses on developing universal humanoid robots for various fields. Their latest product, the XBot-L, is a full-size humanoid..."
wikitext
text/x-wiki
'''Robotera''' was established in August 2023 and focuses on developing universal humanoid robots for various fields. Their latest product, the XBot-L, is a full-size humanoid robot capable of navigating complex terrains and performing advanced tasks with high flexibility and accuracy. Robotera (also known as Xingdong Jiyuan) has received multiple prestigious awards and significant media attention, showcasing its innovation and leadership in the humanoid robot industry.<ref>https://www.robotera.com/</ref>
1f6ee75421a4e81b1903e9543b89934cce33b926
Starbot
0
392
1716
2024-07-11T23:48:39Z
Vrtnis
21
Created page with "'''Starbot''' is a state-of-the-art humanoid robot developed by Professor Jianyu Chen's startup, RobotEra, in collaboration with his research team at Tsinghua University. Star..."
wikitext
text/x-wiki
'''Starbot''' is a state-of-the-art humanoid robot developed by Professor Jianyu Chen's startup, RobotEra, in collaboration with his research team at Tsinghua University. Starbot boasts exceptional motion capabilities, including traversing snowy terrains, navigating staircases, and robust interference resistance, all controlled by a single neural network trained via end-to-end reinforcement learning.<ref>https://mobile.x.com/roboterax/status/1742443820384718930</ref>
01c09e7beaf845425721960d8d86031e57d73774
Navigator α
0
393
1717
2024-07-11T23:49:55Z
Vrtnis
21
Created page with "'''Navigator α''' is a newly launched humanoid robot standing 1.5 meters tall and weighing 50 kilograms. It features advanced hardware, including a planetary reducer, lightwe..."
wikitext
text/x-wiki
'''Navigator α''' is a newly launched humanoid robot standing 1.5 meters tall and weighing 50 kilograms. It features advanced hardware, including a planetary reducer, lightweight humanoid arms, and a dexterous hand with 15 finger joints and six active degrees of freedom. Navigator α integrates mechanism control, imitation learning, and reinforcement learning, utilizing large-scale AI models for significant advancements in hardware and algorithms.<ref>https://www.prnewswire.com/news-releases/navigator--humanoid-robot-supcon-integrates-ai-with-robotics-302107204.html</ref>
0777669abea3e020f2ac5e64436e5b17f3ce26c2
UC Berkeley
0
394
1718
2024-07-11T23:51:22Z
Vrtnis
21
Created page with "'''Berkeley Control, Intelligent Systems, and Robotics (CIR)''' at EECS focuses on modeling systems and machines to ensure appropriate responses to inputs, utilizing optimizat..."
wikitext
text/x-wiki
'''Berkeley Control, Intelligent Systems, and Robotics (CIR)''' at EECS focuses on modeling systems and machines to ensure appropriate responses to inputs, utilizing optimization and mathematical techniques. Research spans semiconductor process control, hybrid and networked control, nonlinear and learning control, and involves collaboration with Mechanical Engineering and Integrative Biology. The robotics research covers mobile autonomous systems, human augmentation, telepresence, and virtual reality, with an emphasis on image understanding and computer vision.<ref>https://www2.eecs.berkeley.edu/Research/Areas/CIR/</ref>
d3bc9a2d3a600e59f2f079f835e17c5180549c9d
University of Tehran
0
395
1719
2024-07-11T23:52:42Z
Vrtnis
21
Created page with "The Robotics Lab at the University of Tehran focuses on advanced research in robotics, including autonomous systems, human-robot interaction, and artificial intelligence. It a..."
wikitext
text/x-wiki
The Robotics Lab at the University of Tehran focuses on advanced research in robotics, including autonomous systems, human-robot interaction, and artificial intelligence. It aims to develop innovative robotic solutions for industrial and service applications.<ref>https://robotics.ut.ac.ir/</ref>
b434d796e55df9577f472075a0d0dda2db75d34d
K-Scale Lecture Circuit
0
299
1721
1591
2024-07-14T01:18:50Z
Ben
2
wikitext
text/x-wiki
{| class="wikitable"
|-
! Date
! Presenter
! Topic
|-
| 2024.07.18
| Wesley
| Consistency Models
|-
| 2024.07.17
| Nathan
| Flow Matching
|-
| 2024.07.17
| Dennis
| Diffusion Models
|-
| 2024.07.16
| Isaac
| Convolutional Neural Networks
|-
| 2024.07.15
| Allen
| RNNs, LSTMs, GRUs
|-
| 2024.06.28
| Kenji
| Principles of Power Electronics
|-
| 2024.06.27
| Paweł
| OpenVLA
|-
| 2024.06.26
| Ryan
| Introduction to KiCAD
|-
| 2024.06.24
| Dennis
| Live Streaming Protocols
|-
| 2024.06.21
| Nathan
| Quantization
|-
| 2024.06.20
| Timothy
| Diffusion
|-
| 2024.06.19
| Allen
| Neural Network Loss Functions
|-
| 2024.06.18
| Ben
| Neural network inference on the edge
|-
| 2024.06.17
| Matt
| CAD software deep dive
|-
| 2024.06.14
| Allen
| [https://github.com/karpathy/llm.c llm.c]
|-
| 2024.06.13
| Kenji
| Engineering principles of BLDCs
|-
| 2024.06.12
| Ryan
| 3-phase motors
|-
| 2024.06.11
| Vedant
| CAN Protocol
|-
| 2024.06.10
| Isaac
| [https://humanoids.wiki/w/Dennis%27_Speech_Project Speech Papers Round 2]
|-
| 2024.06.07
| Ben
| Quantization
|-
| 2024.06.06
| Tom
| Linux Raw
|-
| 2024.06.05
| Hugo
| Gaussian Splats
|-
| 2024.06.04
| Dennis
| [https://humanoids.wiki/w/Dennis%27_Speech_Project Speech representation learning papers]
|-
| 2024.06.03
| Paweł
| What I (want to) believe in
|-
| 2024.05.30
| Isaac
| VLMs
|-
| 2024.05.29
| Allen
| PPO
|}
[[Category: K-Scale]]
e783ead99b090b2d6b918a5adce4406f044e3abb
1722
1721
2024-07-14T01:19:06Z
Ben
2
wikitext
text/x-wiki
{| class="wikitable"
|-
! Date
! Presenter
! Topic
|-
| 2024.07.19
| Wesley
| Consistency Models
|-
| 2024.07.18
| Nathan
| Flow Matching
|-
| 2024.07.17
| Dennis
| Diffusion Models
|-
| 2024.07.16
| Isaac
| Convolutional Neural Networks
|-
| 2024.07.15
| Allen
| RNNs, LSTMs, GRUs
|-
| 2024.06.28
| Kenji
| Principles of Power Electronics
|-
| 2024.06.27
| Paweł
| OpenVLA
|-
| 2024.06.26
| Ryan
| Introduction to KiCAD
|-
| 2024.06.24
| Dennis
| Live Streaming Protocols
|-
| 2024.06.21
| Nathan
| Quantization
|-
| 2024.06.20
| Timothy
| Diffusion
|-
| 2024.06.19
| Allen
| Neural Network Loss Functions
|-
| 2024.06.18
| Ben
| Neural network inference on the edge
|-
| 2024.06.17
| Matt
| CAD software deep dive
|-
| 2024.06.14
| Allen
| [https://github.com/karpathy/llm.c llm.c]
|-
| 2024.06.13
| Kenji
| Engineering principles of BLDCs
|-
| 2024.06.12
| Ryan
| 3-phase motors
|-
| 2024.06.11
| Vedant
| CAN Protocol
|-
| 2024.06.10
| Isaac
| [https://humanoids.wiki/w/Dennis%27_Speech_Project Speech Papers Round 2]
|-
| 2024.06.07
| Ben
| Quantization
|-
| 2024.06.06
| Tom
| Linux Raw
|-
| 2024.06.05
| Hugo
| Gaussian Splats
|-
| 2024.06.04
| Dennis
| [https://humanoids.wiki/w/Dennis%27_Speech_Project Speech representation learning papers]
|-
| 2024.06.03
| Paweł
| What I (want to) believe in
|-
| 2024.05.30
| Isaac
| VLMs
|-
| 2024.05.29
| Allen
| PPO
|}
[[Category: K-Scale]]
7cea6ba1ca6b7728f827a94cd0b8199b37a9f8fb
Expressive Whole-Body Control for Humanoid Robots
0
396
1723
2024-07-18T16:31:52Z
Vrtnis
21
Created page with "= Expressive Whole-Body Control for Humanoid Robots = == Installation == <code> conda create -n humanoid python=3.8 conda activate humanoid cd pip3 install torch==1.10.0+cu11..."
wikitext
text/x-wiki
= Expressive Whole-Body Control for Humanoid Robots =
== Installation ==
<syntaxhighlight lang=bash>
conda create -n humanoid python=3.8
conda activate humanoid
cd
pip3 install torch==1.10.0+cu113 torchvision==0.11.1+cu113 torchaudio==0.10.0+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html
git clone git@github.com:chengxuxin/expressive_humanoid_covariant.git
cd expressive_humanoid_covariant
# Download the Isaac Gym binaries from https://developer.nvidia.com/isaac-gym
cd isaacgym/python && pip install -e .
cd ~/expressive_humanoid_covariant/rsl_rl && pip install -e .
cd ~/expressive_humanoid_covariant/legged_gym && pip install -e .
pip install "numpy<1.24" pydelatin wandb tqdm opencv-python ipdb pyfqmr flask dill gdown
</syntaxhighlight>
Next install fbx. Follow the instructions here.
== Prepare dataset ==
Download the dataset from here and extract the zip file so that ASE/ase/poselib/data/cmu_fbx_all contains all of the .fbx files.
Generate a .yaml file for the motions you want to use.
<syntaxhighlight lang=bash>
cd ASE/ase/poselib
python parse_cmu_mocap_all.py
</syntaxhighlight>
This step is not mandatory because the .yaml file is already generated. But if you want to add more motions, you can use this script to generate the .yaml file.
== Import motions ==
<syntaxhighlight lang=bash>
cd ASE/ase/poselib
python fbx_importer_all.py
</syntaxhighlight>
This will import all motions in CMU Mocap dataset into ASE/ase/poselib/data/npy.
== Retarget motions ==
<syntaxhighlight lang=bash>
cd ASE/ase/poselib
mkdir pkl retarget_npy
python retarget_motion_h1_all.py
</syntaxhighlight>
This will retarget all motions in ASE/ase/poselib/data/npy to ASE/ase/poselib/data/retarget_npy.
== Generate keybody positions ==
This step requires running the simulation to extract more precise key body positions.
<syntaxhighlight lang=bash>
cd legged_gym/legged_gym/scripts
python train.py debug --task h1_view --motion_name motions_debug.yaml --debug
</syntaxhighlight>
Train for 1 iteration and kill the program to have a dummy model to load.
<syntaxhighlight lang=bash>
python play.py debug --task h1_view --motion_name motions_autogen_all.yaml
</syntaxhighlight>
It is recommended to use motions_autogen_all.yaml the first time, so that if you later work with a subset you do not need to regenerate key body positions. This will write the key body positions to ASE/ase/poselib/data/retarget_npy. Configure your wandb entity before training (see the Usage and Arguments sections).
== Usage ==
To train a new policy:
<syntaxhighlight lang=bash>
python train.py xxx-xx-some_descriptions_of_run --device cuda:0 --entity WANDB_ENTITY
</syntaxhighlight>
xxx-xx is usually an id like 000-01. motion_type and motion_name are defined in legged_gym/legged_gym/envs/h1/h1_mimic_config.py; they can also be given as arguments. The default WANDB_ENTITY can be set in legged_gym/legged_gym/utils/helpers.py.
To play a policy:
<syntaxhighlight lang=bash>
python play.py xxx-xx
</syntaxhighlight>
There is no need to write the full experiment id; the parser automatically matches runs by the first six characters (xxx-xx), so make sure you do not reuse the same xxx-xx prefix. Delay is added after 8k iterations; if you want to play a checkpoint from after 8k iterations, add --delay.
To play with example pretrained models:
<syntaxhighlight lang=bash>
python play.py 060-40 --delay --motion_name motions_debug.yaml
</syntaxhighlight>
Press + or - to cycle through different motions; the motion name is printed in the terminal. motions_debug.yaml is a small subset of representative motions intended for debugging.
Save models for deployment:
<code>
python save_jit.py --exptid xxx-xx
</code>
This will save the models in legged_gym/logs/parkour_new/xxx-xx/traced/.
== Viewer Usage ==
Can be used in both IsaacGym and web viewer.
* ALT + Mouse Left + Drag Mouse: move view.
* [ ]: switch to next/prev robot.
* Space: pause/unpause.
* F: switch between free camera and following camera.
=== IsaacGym viewer specific ===
* +: next motion in yaml.
* -: prev motion in yaml.
* r: reset the motion to the beginning.
* ]: camera focus on next env.
* [: camera focus on prev env.
== Arguments ==
* --exptid: string of the form xxx-xx-WHATEVER, where xxx-xx is typically numbers only and WHATEVER is a description of the run.
* --device: can be cuda:0, cpu, etc.
* --delay: whether to add delay or not.
* --checkpoint: the specific checkpoint to load. If not specified, the latest one is loaded.
* --resume: resume from another checkpoint, used together with --resumeid.
* --seed: random seed.
* --no_wandb: disable wandb logging.
* --entity: specify the wandb entity.
* --web: used for playing on headless machines. It forwards a port via vscode so you can visualize seamlessly in vscode with an idle GPU or CPU. The Live Preview vscode extension is required; otherwise you can view it in any browser.
* --motion_name: e.g. 07_04 or motions_all.yaml. If motions_all.yaml is used, motion_type should be yaml.
* --motion_type: single or yaml.
* --fix_base: fix the base of the robot.
For more arguments, refer to legged_gym/utils/helpers.py.
5fbe025d1f9569567fed8d882fb371c5d9a75120
1724
1723
2024-07-18T16:35:13Z
Vrtnis
21
/*Add Refs*/
wikitext
text/x-wiki
= Expressive Whole-Body Control for Humanoid Robots =
== Installation ==
<code>
conda create -n humanoid python=3.8
conda activate humanoid
cd
pip3 install torch==1.10.0+cu113 torchvision==0.11.1+cu113 torchaudio==0.10.0+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html
git clone git@github.com:chengxuxin/expressive_humanoid_covariant.git
cd expressive_humanoid_covariant
# Download the Isaac Gym binaries from https://developer.nvidia.com/isaac-gym
cd isaacgym/python && pip install -e .
cd ~/expressive_humanoid_covariant/rsl_rl && pip install -e .
cd ~/expressive_humanoid_covariant/legged_gym && pip install -e .
pip install "numpy<1.24" pydelatin wandb tqdm opencv-python ipdb pyfqmr flask dill gdown
</code>
Next install fbx. Follow the instructions here.
== Prepare dataset ==
Download the dataset from here and extract the zip file to ASE/ase/poselib/data/cmu_fbx_all so that the folder contains all .fbx files.
Generate .yaml file for the motions you want to use.
<code>
cd ASE/ase/poselib
python parse_cmu_mocap_all.py
</code>
This step is not mandatory because the .yaml file is already generated. But if you want to add more motions, you can use this script to generate the .yaml file.
== Import motions ==
<code>
cd ASE/ase/poselib
python fbx_importer_all.py
</code>
This will import all motions in the CMU Mocap dataset into ASE/ase/poselib/data/npy.
== Retarget motions ==
<code>
cd ASE/ase/poselib
mkdir pkl retarget_npy
python retarget_motion_h1_all.py
</code>
This will retarget all motions in ASE/ase/poselib/data/npy to ASE/ase/poselib/data/retarget_npy.
== Generate keybody positions ==
This step will require running simulation to extract more precise key body positions.
<code>
cd legged_gym/legged_gym/scripts
python train.py debug --task h1_view --motion_name motions_debug.yaml --debug
</code>
Train for 1 iteration and kill the program to have a dummy model to load.
<code>
python play.py debug --task h1_view --motion_name motions_autogen_all.yaml
</code>
It is recommended to use motions_autogen_all.yaml the first time, so that if you later work with a subset it is not necessary to regenerate key body positions. This will generate key body positions in ASE/ase/poselib/data/retarget_npy. Finally, set up wandb before training (see Usage below).
== Usage ==
To train a new policy:
<code>
python train.py xxx-xx-some_descriptions_of_run --device cuda:0 --entity WANDB_ENTITY
</code>
xxx-xx is usually an id like 000-01. motion_type and motion_name are defined in legged_gym/legged_gym/envs/h1/h1_mimic_config.py; they can also be given as command-line arguments. The default WANDB_ENTITY can be set in legged_gym/legged_gym/utils/helpers.py.
To play a policy:
<code>
python play.py xxx-xx
</code>
There is no need to write the full experiment id; the parser automatically matches runs by the first six characters (xxx-xx), so make sure you do not reuse the same xxx-xx. Delay is added after 8k iterations; if you want to play a policy trained beyond 8k iterations, add --delay.
To play with example pretrained models:
<code>
python play.py 060-40 --delay --motion_name motions_debug.yaml
</code>
Press + or - to switch between motions; the motion name is printed to the terminal. motions_debug.yaml is a small subset of representative motions intended for debugging.
Save models for deployment:
<code>
python save_jit.py --exptid xxx-xx
</code>
This will save the models in legged_gym/logs/parkour_new/xxx-xx/traced/.
== Viewer Usage ==
Can be used in both IsaacGym and web viewer.
* ALT + Mouse Left + Drag Mouse: move view.
* [ ]: switch to next/prev robot.
* Space: pause/unpause.
* F: switch between free camera and following camera.
=== IsaacGym viewer specific ===
* +: next motion in yaml.
* -: prev motion in yaml.
* r: reset the motion to the beginning.
* ]: camera focus on next env.
* [: camera focus on prev env.
== Arguments ==
* --exptid: string of the form xxx-xx-WHATEVER, where xxx-xx is typically numbers only and WHATEVER is a description of the run.
* --device: can be cuda:0, cpu, etc.
* --delay: whether to add delay or not.
* --checkpoint: the specific checkpoint to load. If not specified, the latest one is loaded.
* --resume: resume from another checkpoint, used together with --resumeid.
* --seed: random seed.
* --no_wandb: disable wandb logging.
* --entity: specify the wandb entity.
* --web: used for playing on headless machines. It forwards a port via vscode so you can visualize seamlessly in vscode with an idle GPU or CPU. The Live Preview vscode extension is required; otherwise you can view it in any browser.
* --motion_name: e.g. 07_04 or motions_all.yaml. If motions_all.yaml is used, motion_type should be yaml.
* --motion_type: single or yaml.
* --fix_base: fix the base of the robot.
For more arguments, refer to legged_gym/utils/helpers.py.
== References ==
* [https://expressive-humanoid.github.io/ Expressive Whole-Body Control for Humanoid Robots]
* [https://chengxuxin.github.io/ Xuxin Cheng]
* [https://yandongji.github.io/ Yandong Ji]
* [https://jeremycjm.github.io/ Junming Chen]
* [https://www.episodeyang.com/ Ge Yang]
* [https://xiaolonw.github.io/ Xiaolong Wang]
* Publisher: UC San Diego
* Year: 2024
* [https://expressive-humanoid.github.io/ Website]
* [https://arxiv.org/abs/2402.16796 arXiv]
5d8cd4bff5db14dda776ad4d2cba0962e80bbd6f
Robotic Control via Embodied Chain-of-Thought Reasoning
0
397
1725
2024-07-18T20:01:41Z
Vrtnis
21
Created page with "= Robotic Control via Embodied Chain-of-Thought Reasoning = The codebase is built on top of OpenVLA. We refer to it for the detailed documentation of the code and dependencie..."
wikitext
text/x-wiki
= Robotic Control via Embodied Chain-of-Thought Reasoning =
The codebase is built on top of OpenVLA. We refer to it for the detailed documentation of the code and dependencies.
== Quickstart ==
We provide a Colab notebook containing code for loading up our ECoT policy and using it to generate reasoning and actions in response to an observation. Loading the model for inference is easy:
<code>
import torch
from transformers import AutoModelForVision2Seq, AutoProcessor
device = "cuda"
path_to_hf = "Embodied-CoT/ecot-openvla-7b-bridge"
processor = AutoProcessor.from_pretrained(path_to_hf, trust_remote_code=True)
vla = AutoModelForVision2Seq.from_pretrained(path_to_hf, torch_dtype=torch.bfloat16).to(device)
image = <ROBOT IMAGE OBSERVATION HERE>        # a PIL image from the robot camera
instruction = <YOUR INSTRUCTION HERE>         # a natural-language task string
prompt = "A chat between a curious user and an artificial intelligence assistant. " + \
"The assistant gives helpful, detailed, and polite answers to the user's questions. " + \
f"USER: What action should the robot take to {instruction.lower()}? ASSISTANT: TASK:"
inputs = processor(prompt, image).to(device, dtype=torch.bfloat16)
action, generated_ids = vla.predict_action(**inputs, unnorm_key="bridge_orig", max_new_tokens=1024)
generated_text = processor.batch_decode(generated_ids)[0]
</code>
The standard model in torch.bfloat16 requires 16 GB of GPU memory, but using bitsandbytes and 4-bit quantization lowers memory usage to around 5 GB. See the Colab for more details.
== Training and Evaluation ==
To train the models from scratch, use the following command:
<code>
torchrun --standalone --nnodes 1 --nproc-per-node 8 vla-scripts/train.py \
--vla.type "prism-dinosiglip-224px+mx-bridge" \
--data_root_dir <path to training data root> \
--run_root_dir <path to checkpoint saving directory> \
--wandb_project <wandb project name> \
--wandb_entity <wandb user name>
</code>
To evaluate the model on the WidowX robot:
<code>
python3 experiments/bridge/eval_model_in_bridge_env.py \
--model.type prism-dinosiglip-224px+7b \
--pretrained_checkpoint <path to checkpoint> \
--host_ip <robot interface IP> \
--port <robot interface port>
</code>
== Pretrained models ==
We release two ECoT models trained as part of our work, and the dataset of reasonings, available on our HuggingFace page:
* '''ecot-openvla-7b-bridge''': The main model that we used for most of our experiments. It was trained on the Bridge dataset annotated with the reasoning for 80k steps.
* '''ecot-openvla-7b-oxe''': A policy that was initially trained on the Open-X-Embodiment dataset actions, fine-tuned on the mixture of OXE action-only data and our reasonings for Bridge for another 20k steps.
* '''embodied_features_bridge''': A dataset of the embodied features and reasonings collected for Bridge demonstrations.
=== Explicit Notes on Model Licensing & Commercial Use ===
While all code in this repository is released under an MIT License, our pretrained models may inherit restrictions from the underlying base models we use. Specifically, both the above models are derived from Llama-2, and as such are subject to the Llama Community License.
== Installation ==
See the original OpenVLA repository for detailed installation instructions.
== Repository Structure ==
High-level overview of repository/project file-tree:
* '''prismatic''': Package source; provides core utilities for model loading, training, data preprocessing, etc.
* '''experiments''': Code for evaluating the policies on a WidowX robot.
* '''vla-scripts/''': Core scripts for training, fine-tuning, and deploying VLAs.
* '''LICENSE''': All code is made available under the MIT License; happy hacking!
* '''Makefile''': Top-level Makefile (by default, supports linting - checking & auto-fix); extend as needed.
* '''pyproject.toml''': Full project configuration details (including dependencies), as well as tool configurations.
* '''README.md''': You are here!
== Citation ==
If you find our code or models useful in your work, please cite our paper:
<code>
@article{Zawalski24-ecot,
title={Robotic Control via Embodied Chain-of-Thought Reasoning},
author={Michał Zawalski and William Chen and Karl Pertsch and Oier Mees and Chelsea Finn and Sergey Levine},
journal={arXiv preprint arXiv:2407.08693},
year={2024}
}
</code>
6ff636270dbe3f66e88bfeb453ae2515e2342f9c
1726
1725
2024-07-18T20:05:49Z
Vrtnis
21
wikitext
text/x-wiki
= Robotic Control via Embodied Chain-of-Thought Reasoning =
Embodied Chain-of-Thought Reasoning (ECoT) is a novel approach for training robotic policies. This approach trains a vision-language-action model to generate reasoning steps in response to instructions and images before choosing a robot action, enabling better performance, interpretability, and generalization.
The codebase is built on top of OpenVLA. Refer to it for detailed documentation of the code and dependencies.
== Quickstart ==
A Colab notebook is provided containing code for loading the ECoT policy and using it to generate reasoning and actions in response to an observation. Loading the model for inference is easy:
<code>
import torch
from transformers import AutoModelForVision2Seq, AutoProcessor
device = "cuda"
path_to_hf = "Embodied-CoT/ecot-openvla-7b-bridge"
processor = AutoProcessor.from_pretrained(path_to_hf, trust_remote_code=True)
vla = AutoModelForVision2Seq.from_pretrained(path_to_hf, torch_dtype=torch.bfloat16).to(device)
image = <ROBOT IMAGE OBSERVATION HERE>        # a PIL image from the robot camera
instruction = <YOUR INSTRUCTION HERE>         # a natural-language task string
prompt = "A chat between a curious user and an artificial intelligence assistant. " + \
"The assistant gives helpful, detailed, and polite answers to the user's questions. " + \
f"USER: What action should the robot take to {instruction.lower()}? ASSISTANT: TASK:"
inputs = processor(prompt, image).to(device, dtype=torch.bfloat16)
action, generated_ids = vla.predict_action(**inputs, unnorm_key="bridge_orig", max_new_tokens=1024)
generated_text = processor.batch_decode(generated_ids)[0]
</code>
The standard model in torch.bfloat16 requires 16 GB of GPU memory, but using bitsandbytes and 4-bit quantization lowers memory usage to around 5 GB. See the Colab for more details.
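As a rough illustration of the 4-bit option, the sketch below loads the same checkpoint through a transformers BitsAndBytesConfig. It assumes recent versions of transformers, bitsandbytes, and accelerate are installed; the exact settings used in the Colab may differ.
<code>
import torch
from transformers import AutoModelForVision2Seq, AutoProcessor, BitsAndBytesConfig
path_to_hf = "Embodied-CoT/ecot-openvla-7b-bridge"
# Assumed settings: load the weights in 4-bit NF4 and keep compute in bfloat16.
quant_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=torch.bfloat16)
processor = AutoProcessor.from_pretrained(path_to_hf, trust_remote_code=True)
vla = AutoModelForVision2Seq.from_pretrained(path_to_hf, quantization_config=quant_config, device_map="auto", torch_dtype=torch.bfloat16)
# Do not call .to(device) on a 4-bit model; device_map handles placement.
</code>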
== Training and Evaluation ==
To train the models from scratch, use the following command:
<code>
torchrun --standalone --nnodes 1 --nproc-per-node 8 vla-scripts/train.py \
--vla.type "prism-dinosiglip-224px+mx-bridge" \
--data_root_dir <path to training data root> \
--run_root_dir <path to checkpoint saving directory> \
--wandb_project <wandb project name> \
--wandb_entity <wandb user name>
</code>
To evaluate the model on the WidowX robot:
<code>
python3 experiments/bridge/eval_model_in_bridge_env.py \
--model.type prism-dinosiglip-224px+7b \
--pretrained_checkpoint <path to checkpoint> \
--host_ip <robot interface IP> \
--port <robot interface port>
</code>
== Pretrained models ==
Two ECoT models and a dataset of reasonings are available on the HuggingFace page:
* '''ecot-openvla-7b-bridge''': The main model used for most of the experiments. It was trained on the Bridge dataset annotated with the reasoning for 80k steps.
* '''ecot-openvla-7b-oxe''': A policy initially trained on the Open-X-Embodiment dataset actions, fine-tuned on the mixture of OXE action-only data and reasonings for Bridge for another 20k steps.
* '''embodied_features_bridge''': A dataset of the embodied features and reasonings collected for Bridge demonstrations.
=== Explicit Notes on Model Licensing & Commercial Use ===
While all code in this repository is released under an MIT License, the pretrained models may inherit restrictions from the underlying base models. Specifically, both models are derived from Llama-2, and are subject to the Llama Community License.
== Installation ==
Refer to the original OpenVLA repository for detailed installation instructions.
== Repository Structure ==
High-level overview of repository/project file-tree:
* '''prismatic''': Package source; provides core utilities for model loading, training, data preprocessing, etc.
* '''experiments''': Code for evaluating the policies on a WidowX robot.
* '''vla-scripts/''': Core scripts for training, fine-tuning, and deploying VLAs.
* '''LICENSE''': All code is made available under the MIT License.
* '''Makefile''': Top-level Makefile (by default, supports linting - checking & auto-fix); extend as needed.
* '''pyproject.toml''': Full project configuration details (including dependencies), as well as tool configurations.
* '''README.md''': You are here!
== Citation ==
If the code or models are useful in your work, please cite the paper:
<code>
@article{Zawalski24-ecot,
title={Robotic Control via Embodied Chain-of-Thought Reasoning},
author={Michał Zawalski and William Chen and Karl Pertsch and Oier Mees and Chelsea Finn and Sergey Levine},
journal={arXiv preprint arXiv:2407.08693},
year={2024}
}
</code>
5e6cd6c45c7c03e97f8d2fa94d421265aa0dfd99
Humanoid Gym
0
275
1727
1186
2024-07-18T23:01:25Z
Vrtnis
21
wikitext
text/x-wiki
Humanoid-Gym is an advanced reinforcement learning (RL) framework built on Nvidia Isaac Gym, designed for training locomotion skills in humanoid robots. Notably, it emphasizes zero-shot transfer, enabling skills learned in simulation to be directly applied to real-world environments without additional adjustments.
[https://github.com/roboterax/humanoid-gym GitHub]
== Demo ==
This codebase is verified by RobotEra's XBot-S (1.2 meter tall humanoid robot) and XBot-L (1.65 meter tall humanoid robot) in a real-world environment with zero-shot sim-to-real transfer.
== Features ==
=== 1. Humanoid Robot Training ===
This repository offers comprehensive guidance and scripts for training humanoid robots. Humanoid-Gym features specialized rewards for humanoid robots, reducing the difficulty of sim-to-real transfer. RobotEra's XBot-L is used as the primary example, but the framework can also be used for other robots with minimal adjustments. Resources cover setup, configuration, and execution, aiming to fully prepare the robot for real-world locomotion by providing in-depth training and optimization.
* Comprehensive Training Guidelines: Thorough walkthroughs for each stage of the training process.
* Step-by-Step Configuration Instructions: Clear and succinct guidance ensuring an efficient setup process.
* Execution Scripts for Easy Deployment: Pre-prepared scripts to streamline the training workflow.
=== 2. Sim2Sim Support ===
Humanoid-Gym includes a sim2sim pipeline, allowing the transfer of trained policies to highly accurate and carefully designed simulated environments. Once the robot is acquired, the RL-trained policies can be confidently deployed in real-world settings.
Simulator settings, particularly with Mujoco, are finely tuned to closely mimic real-world scenarios. This careful calibration ensures that performances in both simulated and real-world environments are closely aligned, making simulations more trustworthy and enhancing their applicability to real-world scenarios.
=== 3. Denoising World Model Learning (Coming Soon!) ===
Denoising World Model Learning (DWL) presents an advanced sim-to-real framework integrating state estimation and system identification. This dual-method approach ensures the robot's learning and adaptation are both practical and effective in real-world contexts.
* Enhanced Sim-to-real Adaptability: Techniques to optimize the robot's transition from simulated to real environments.
* Improved State Estimation Capabilities: Advanced tools for precise and reliable state analysis.
=== Dexterous Hand Manipulation (Coming Soon!) ===
== Installation ==
Generate a new Python virtual environment with Python 3.8:
<code>
conda create -n myenv python=3.8
</code>
For best performance, it is recommended to use NVIDIA driver version 525:
<code>
sudo apt install nvidia-driver-525
</code>
The minimal driver version supported is 515. If unable to install version 525, ensure that the system has at least version 515 to maintain basic functionality.
Install PyTorch 1.13 with Cuda-11.7:
<code>
conda install pytorch==1.13.1 torchvision==0.14.1 torchaudio==0.13.1 pytorch-cuda=11.7 -c pytorch -c nvidia
</code>
Install numpy-1.23:
<code>
conda install numpy=1.23
</code>
Install Isaac Gym:
Download and install Isaac Gym Preview 4 from [https://developer.nvidia.com/isaac-gym].
<code>
cd isaacgym/python && pip install -e .
</code>
Run an example:
<code>
cd examples && python 1080_balls_of_solitude.py
</code>
Consult isaacgym/docs/index.html for troubleshooting.
Install humanoid-gym:
Clone this repository:
<code>
git clone https://github.com/roboterax/humanoid-gym.git
cd humanoid-gym && pip install -e .
</code>
== Usage Guide ==
=== Examples ===
# Launching PPO Policy Training for 'v1' Across 4096 Environments
<code>
python scripts/train.py --task=humanoid_ppo --run_name v1 --headless --num_envs 4096
</code>
# Evaluating the Trained PPO Policy 'v1'
<code>
python scripts/play.py --task=humanoid_ppo --run_name v1
</code>
# Implementing Simulation-to-Simulation Model Transformation
<code>
python scripts/sim2sim.py --load_model /path/to/logs/XBot_ppo/exported/policies/policy_1.pt
</code>
# Run our trained policy
<code>
python scripts/sim2sim.py --load_model /path/to/logs/XBot_ppo/exported/policies/policy_example.pt
</code>
=== 1. Default Tasks ===
* humanoid_ppo
* Purpose: Baseline, PPO policy, Multi-frame low-level control
* Observation Space: variable dimensions, scaling with the number of stacked frames.
* Privileged Information: a fixed number of additional privileged observation dimensions.
* humanoid_dwl (coming soon)
=== 2. PPO Policy ===
* Training Command:
<code>
python humanoid/scripts/train.py --task=humanoid_ppo --load_run log_file_path --name run_name
</code>
* Running a Trained Policy:
<code>
python humanoid/scripts/play.py --task=humanoid_ppo --load_run log_file_path --name run_name
</code>
By default, the latest model of the last run from the experiment folder is loaded. Other run iterations/models can be selected by adjusting load_run and checkpoint in the training config.
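For orientation, a minimal sketch of selecting a run and checkpoint in a legged_gym-style training config is shown below; the class name, import path, and field names are illustrative and should be checked against the actual humanoid_ppo config.
<code>
# Illustrative sketch only; verify the import path and field names against this repository.
from humanoid.envs.base.legged_robot_config import LeggedRobotCfgPPO  # assumed path
class XBotLCfgPPO(LeggedRobotCfgPPO):
    class runner(LeggedRobotCfgPPO.runner):
        experiment_name = "XBot_ppo"
        run_name = "v1"
        load_run = "Jul18_10-00-00_v1"  # -1 would load the last run
        checkpoint = 3000               # -1 would load the latest model_<iteration>.pt
</code>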
=== 3. Sim-to-sim ===
Before initiating the sim-to-sim process, ensure that play.py is run to export a JIT policy.
Mujoco-based Sim2Sim Deployment:
<code>
python scripts/sim2sim.py --load_model /path/to/export/model.pt
</code>
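To make the exported-policy step concrete, below is a minimal, self-contained sketch of what a TorchScript export generally looks like; play.py already performs this export, and the network here is only a stand-in for the trained actor with made-up observation and action sizes.
<code>
import torch
import torch.nn as nn
# Stand-in actor network; the sizes 47 and 12 are hypothetical placeholders.
policy = nn.Sequential(nn.Linear(47, 256), nn.ELU(), nn.Linear(256, 12))
example_obs = torch.zeros(1, 47)
traced_policy = torch.jit.trace(policy, example_obs)  # record a forward pass
traced_policy.save("policy_1.pt")  # a traced .pt like this is what sim2sim.py --load_model expects
</code>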
=== 4. Parameters ===
* CPU and GPU Usage: To run simulations on the CPU, set both --sim_device=cpu and --rl_device=cpu. For GPU operations, specify --sim_device=cuda:{0,1,2...} and --rl_device={0,1,2...} accordingly. Note that CUDA_VISIBLE_DEVICES is not applicable, and it's essential to match the --sim_device and --rl_device settings.
* Headless Operation: Include --headless for operations without rendering.
* Rendering Control: Press 'v' to toggle rendering during training.
* Policy Location: Trained policies are saved in humanoid/logs/<experiment_name>/<date_time>_<run_name>/model_<iteration>.pt.
=== 5. Command-Line Arguments ===
For RL training, refer to humanoid/utils/helpers.py#L161. For the sim-to-sim process, refer to humanoid/scripts/sim2sim.py#L169.
== Code Structure ==
Every environment hinges on an env file (legged_robot.py) and a configuration file (legged_robot_config.py). The latter houses two classes: LeggedRobotCfg (encompassing all environmental parameters) and LeggedRobotCfgPPO (denoting all training parameters). Both env and config classes use inheritance. Each non-zero reward scale specified in the cfg adds a reward function of the corresponding name to the total reward.
Tasks must be registered with task_registry.register(name, EnvClass, EnvConfig, TrainConfig). Registration may occur within envs/__init__.py, or outside of this repository.
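As a sketch of what registration can look like (class names and import paths below are placeholders; only the task_registry.register signature and the config classes come from the description above):
<code>
# Hypothetical example; adjust module paths and class names to your own environment.
from humanoid.utils.task_registry import task_registry  # assumed import path
from .my_robot.my_robot_env import MyRobotEnv
from .my_robot.my_robot_config import MyRobotCfg, MyRobotCfgPPO
# Typically placed in humanoid/envs/__init__.py:
task_registry.register("my_robot_ppo", MyRobotEnv, MyRobotCfg(), MyRobotCfgPPO())
</code>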
=== Add a new environment ===
The base environment legged_robot constructs a rough-terrain locomotion task. The corresponding configuration does not specify a robot asset (URDF/MJCF) and defines no reward scales.
If adding a new environment, create a new folder in the envs/ directory with a configuration file named <your_env>_config.py. The new configuration should inherit from existing environment configurations.
=== If proposing a new robot ===
* Insert the corresponding assets in the resources/ folder.
* In the cfg file, set the path to the asset, define body names, default_joint_positions, and PD gains. Specify the desired train_cfg and the environment's name (python class).
* In the train_cfg, set the experiment_name and run_name.
* If needed, create your environment in <your_env>.py. Inherit from existing environments, override desired functions and/or add your reward functions.
* Register your environment in humanoid/envs/__init__.py.
* Modify or tune other parameters in your cfg or cfg_train as per requirements. To remove a reward, set its scale to zero. Avoid modifying the parameters of other environments!
If a new robot/environment needs to perform sim2sim, modifications to humanoid/scripts/sim2sim.py may be required:
* Check the joint mapping of the robot between MJCF and URDF.
* Change the initial joint position of the robot according to the trained policy.
=== Troubleshooting ===
# ImportError: libpython3.8.so.1.0: cannot open shared object file: No such file or directory
<code>
export LD_LIBRARY_PATH="${HOME}/miniconda3/envs/your_env/lib:$LD_LIBRARY_PATH"
</code>
or
<code>
sudo apt install libpython3.8
</code>
# AttributeError: module 'distutils' has no attribute 'version'
<code>
conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch
</code>
# ImportError: /home/roboterax/anaconda3/bin/../lib/libstdc++.so.6: version `GLIBCXX_3.4.20` not found (required by /home/roboterax/carbgym/python/isaacgym/_bindings/linux64/gym_36.so)
<code>
mkdir ${YOUR_CONDA_ENV}/lib/_unused
mv ${YOUR_CONDA_ENV}/lib/libstdc++* ${YOUR_CONDA_ENV}/lib/_unused
</code>
== Citation ==
Please cite the following if you use this code or parts of it:
<code>
@article{gu2024humanoid,
title={Humanoid-Gym: Reinforcement Learning for Humanoid Robot with Zero-Shot Sim2Real Transfer},
author={Gu, Xinyang and Wang, Yen-Jen and Chen, Jianyu},
journal={arXiv preprint arXiv:2404.05695},
year={2024}
}
</code>
== Acknowledgment ==
The implementation of Humanoid-Gym relies on resources from legged_gym and rsl_rl projects, created by the Robotic Systems Lab. The LeggedRobot implementation from their research is specifically utilized to enhance this codebase.
e8b8c9faf0382ce4c4a66a5976f7abbf6098e1d0
Mujoco WASM Build From Source
0
398
1728
2024-07-20T21:50:50Z
Vrtnis
21
Created page with "== Suggestions to Build MuJoCo WASM with Release 3.1.6 == 1. '''Clone the MuJoCo Repository:''' <syntaxhighlight lang="sh"> git clone --branch 3.1.6 https://github.com..."
wikitext
text/x-wiki
== Suggestions to Build MuJoCo WASM with Release 3.1.6 ==
1. '''Clone the MuJoCo Repository:'''
<syntaxhighlight lang="sh">
git clone --branch 3.1.6 https://github.com/deepmind/mujoco.git
cd mujoco
</syntaxhighlight>
2. '''Clone the Emscripten SDK Repository:'''
<syntaxhighlight lang="sh">
git clone https://github.com/emscripten-core/emsdk.git
cd emsdk
</syntaxhighlight>
3. '''Install and Activate Emscripten:'''
<syntaxhighlight lang="sh">
./emsdk install latest
./emsdk activate latest
source ./emsdk_env.sh
cd ..
</syntaxhighlight>
4. '''Prepare the Build Environment:'''
<syntaxhighlight lang="sh">
mkdir build
cd build
</syntaxhighlight>
5. '''Run Emscripten CMake Commands:'''
<syntaxhighlight lang="sh">
emcmake cmake ..
emmake make
</syntaxhighlight>
6. '''Deploy and Run Locally:'''
<syntaxhighlight lang="sh">
emrun --no_browser --port 8080 .
</syntaxhighlight>
7. '''Optional Cleanup and Repeat Steps if Necessary:'''
<syntaxhighlight lang="sh">
rm -rf *
emcmake cmake ..
emmake make
</syntaxhighlight>
8. '''Install the Build:'''
<syntaxhighlight lang="sh">
cmake --install .
sudo cmake --install .
</syntaxhighlight>
=== Notes ===
* Ensure that the Emscripten environment is correctly activated before starting the build process.
* Regularly clean the build directory to maintain a clean build environment.
e9212d658914d8a6d6ba3f289c556c3350ec1c49
1729
1728
2024-07-20T21:52:34Z
Vrtnis
21
wikitext
text/x-wiki
== Suggestions to Build MuJoCo WASM with Release 3.1.6 ==
1. '''Clone the MuJoCo Repository:'''
<syntaxhighlight lang="sh">
git clone --branch 3.1.6 https://github.com/deepmind/mujoco.git
cd mujoco
</syntaxhighlight>
2. '''Clone the Emscripten SDK Repository:'''
<syntaxhighlight lang="sh">
git clone https://github.com/emscripten-core/emsdk.git
cd emsdk
</syntaxhighlight>
3. '''Install and Activate Emscripten:'''
<syntaxhighlight lang="sh">
./emsdk install latest
./emsdk activate latest
source ./emsdk_env.sh
cd ..
</syntaxhighlight>
Proceed with the next steps once you've made suggested changes to your codebase to allow for a WASM build.
4. '''Prepare the Build Environment:'''
<syntaxhighlight lang="sh">
mkdir build
cd build
</syntaxhighlight>
5. '''Run Emscripten CMake Commands:'''
<syntaxhighlight lang="sh">
emcmake cmake ..
emmake make
</syntaxhighlight>
6. '''Deploy and Run Locally:'''
<syntaxhighlight lang="sh">
emrun --no_browser --port 8080 .
</syntaxhighlight>
7. '''Optional Cleanup and Repeat Steps if Necessary:'''
<syntaxhighlight lang="sh">
rm -rf *
emcmake cmake ..
emmake make
</syntaxhighlight>
8. '''Install the Build:'''
<syntaxhighlight lang="sh">
cmake --install .
sudo cmake --install .
</syntaxhighlight>
=== Notes ===
* Ensure that the Emscripten environment is correctly activated before starting the build process.
* Regularly clean the build directory to maintain a clean build environment.
44278d2a4c04b6fca2eb34f9e7f28926dcb3af08
1730
1729
2024-07-21T00:22:47Z
Vrtnis
21
wikitext
text/x-wiki
== Suggestions to Build MuJoCo WASM with Release 3.1.6 ==
Note: These instructions relate to building MuJoCo as WASM from the DeepMind MuJoCo source on GitHub. If you want to build an existing WASM port of MuJoCo 2.3.1, check out [[MuJoCo_WASM]].
1. '''Clone the MuJoCo Repository:'''
<syntaxhighlight lang="sh">
git clone --branch 3.1.6 https://github.com/deepmind/mujoco.git
cd mujoco
</syntaxhighlight>
2. '''Clone the Emscripten SDK Repository:'''
<syntaxhighlight lang="sh">
git clone https://github.com/emscripten-core/emsdk.git
cd emsdk
</syntaxhighlight>
3. '''Install and Activate Emscripten:'''
<syntaxhighlight lang="sh">
./emsdk install latest
./emsdk activate latest
source ./emsdk_env.sh
cd ..
</syntaxhighlight>
Proceed with the next steps once you've made suggested changes to your codebase to allow for a WASM build.
4. '''Prepare the Build Environment:'''
<syntaxhighlight lang="sh">
mkdir build
cd build
</syntaxhighlight>
5. '''Run Emscripten CMake Commands:'''
<syntaxhighlight lang="sh">
emcmake cmake ..
emmake make
</syntaxhighlight>
6. '''Deploy and Run Locally:'''
<syntaxhighlight lang="sh">
emrun --no_browser --port 8080 .
</syntaxhighlight>
7. '''Optional Cleanup and Repeat Steps if Necessary:'''
<syntaxhighlight lang="sh">
rm -rf *
emcmake cmake ..
emmake make
</syntaxhighlight>
8. '''Install the Build:'''
<syntaxhighlight lang="sh">
cmake --install .
sudo cmake --install .
</syntaxhighlight>
=== Notes ===
* Ensure that the Emscripten environment is correctly activated before starting the build process.
* Regularly clean the build directory to maintain a clean build environment.
d285a6d719a2065bb03898c9690ec249c083a11e
1731
1730
2024-07-21T00:32:45Z
Vrtnis
21
wikitext
text/x-wiki
== Suggestions to Build MuJoCo WASM with Release 3.1.6 ==
Note: These instructions relate to building MuJoCo as WASM from the DeepMind MuJoCo source on GitHub. If you want to build an existing WASM port of MuJoCo 2.3.1, check out [[MuJoCo_WASM]].
1. '''Clone the MuJoCo Repository:'''
<syntaxhighlight lang="sh">
git clone --branch 3.1.6 https://github.com/deepmind/mujoco.git
cd mujoco
</syntaxhighlight>
2. '''Clone the Emscripten SDK Repository:'''
<syntaxhighlight lang="sh">
git clone https://github.com/emscripten-core/emsdk.git
cd emsdk
</syntaxhighlight>
3. '''Install and Activate Emscripten:'''
<syntaxhighlight lang="sh">
./emsdk install latest
./emsdk activate latest
source ./emsdk_env.sh
cd ..
</syntaxhighlight>
Proceed with the next steps once you've made suggested changes to your codebase to allow for a WASM build. You can see the changes here as a reference https://github.com/google-deepmind/mujoco/compare/main...vrtnis:mujoco:release-3.1.6
4. '''Prepare the Build Environment:'''
<syntaxhighlight lang="sh">
mkdir build
cd build
</syntaxhighlight>
5. '''Run Emscripten CMake Commands:'''
<syntaxhighlight lang="sh">
emcmake cmake ..
emmake make
</syntaxhighlight>
6. '''Deploy and Run Locally:'''
<syntaxhighlight lang="sh">
emrun --no_browser --port 8080 .
</syntaxhighlight>
7. '''Optional Cleanup and Repeat Steps if Necessary:'''
<syntaxhighlight lang="sh">
rm -rf *
emcmake cmake ..
emmake make
</syntaxhighlight>
8. '''Install the Build:'''
<syntaxhighlight lang="sh">
cmake --install .
sudo cmake --install .
</syntaxhighlight>
=== Notes ===
* Ensure that the Emscripten environment is correctly activated before starting the build process.
* Regularly clean the build directory to maintain a clean build environment.
48f845555806992ac0fce3fe1af85762267e1492
1732
1731
2024-07-21T00:36:14Z
Vrtnis
21
wikitext
text/x-wiki
== Suggestions to Build MuJoCo WASM with Release 3.1.6 ==
Note: These instructions relate to building MuJoCo as WASM from the DeepMind MuJoCo source on GitHub. If you want to build an existing WASM port of MuJoCo 2.3.1, check out [[MuJoCo_WASM]].
1. '''Clone the MuJoCo Repository:'''
<syntaxhighlight lang="sh">
git clone --branch 3.1.6 https://github.com/deepmind/mujoco.git
cd mujoco
</syntaxhighlight>
2. '''Clone the Emscripten SDK Repository:'''
<syntaxhighlight lang="sh">
git clone https://github.com/emscripten-core/emsdk.git
cd emsdk
</syntaxhighlight>
3. '''Install and Activate Emscripten:'''
<syntaxhighlight lang="sh">
./emsdk install latest
./emsdk activate latest
source ./emsdk_env.sh
cd ..
</syntaxhighlight>
Proceed with the next steps once you've made suggested changes to your codebase to allow for a WASM build. You can see the changes here as a reference https://github.com/google-deepmind/mujoco/compare/main...vrtnis:mujoco:release-3.1.6
You will probably have to disable several features of 3.1.6 to allow for a successful WASM build. For example, in CMakeLists.txt:
* Build options modified: the options to build the examples, the simulate library, and the tests for MuJoCo have been turned off by default (changed from ON to OFF).
* Emscripten setup added: new configurations and target properties for building MuJoCo with Emscripten have been added. This includes defining source files, checking their existence, and setting specific properties and options for building WebAssembly (.wasm) and HTML output.
* Target library links updated: some target link options have been adjusted, including removing the lodepng library from the target link list.
Also, in mjxmacro.h you have to add explicit casts to size_t in the calculations of the key_mpos and key_mquat array sizes, ensuring correct memory allocation and preventing potential integer overflow issues.
4. '''Prepare the Build Environment:'''
<syntaxhighlight lang="sh">
mkdir build
cd build
</syntaxhighlight>
5. '''Run Emscripten CMake Commands:'''
<syntaxhighlight lang="sh">
emcmake cmake ..
emmake make
</syntaxhighlight>
6. '''Deploy and Run Locally:'''
<syntaxhighlight lang="sh">
emrun --no_browser --port 8080 .
</syntaxhighlight>
7. '''Optional Cleanup and Repeat Steps if Necessary:'''
<syntaxhighlight lang="sh">
rm -rf *
emcmake cmake ..
emmake make
</syntaxhighlight>
8. '''Install the Build:'''
<syntaxhighlight lang="sh">
cmake --install .
sudo cmake --install .
</syntaxhighlight>
=== Notes ===
* Ensure that the Emscripten environment is correctly activated before starting the build process.
* Regularly clean the build directory to maintain a clean build environment.
cd252c5ede3f147aa94a694e255024daa9b84b64
1733
1732
2024-07-21T00:36:53Z
Vrtnis
21
wikitext
text/x-wiki
== Suggestions to Build MuJoCo WASM with Release 3.1.6 ==
Note: These instructions relate to building MuJoCo as WASM from the DeepMind MuJoCo source on GitHub. If you want to build an existing WASM port of MuJoCo 2.3.1, check out [[MuJoCo_WASM]].
1. '''Clone the MuJoCo Repository:'''
<syntaxhighlight lang="sh">
git clone --branch 3.1.6 https://github.com/deepmind/mujoco.git
cd mujoco
</syntaxhighlight>
2. '''Clone the Emscripten SDK Repository:'''
<syntaxhighlight lang="sh">
git clone https://github.com/emscripten-core/emsdk.git
cd emsdk
</syntaxhighlight>
3. '''Install and Activate Emscripten:'''
<syntaxhighlight lang="sh">
./emsdk install latest
./emsdk activate latest
source ./emsdk_env.sh
cd ..
</syntaxhighlight>
Proceed with the next steps once you've made suggested changes to your codebase to allow for a WASM build. You can see the changes here as a reference https://github.com/google-deepmind/mujoco/compare/main...vrtnis:mujoco:release-3.1.6
You probably have to disable several features of 3.1.6 to allow for a successful WASM build.
For example in CMakeLists.txt, the options to build examples, simulate library, and tests for MuJoCo have been turned off by default (changed from ON to OFF). New configurations and target properties for building MuJoCo with Emscripten have been added. This includes defining source files, checking their existence, and setting specific properties and options for building WebAssembly (.wasm) and HTML output.
Some target link options have also been adjusted, including removing the lodepng library from the target link list.
Also, in mjxmacro.h you have to add explicit casts to size_t in the calculations of the key_mpos and key_mquat array sizes, ensuring correct memory allocation and preventing potential integer overflow issues.
4. '''Prepare the Build Environment:'''
<syntaxhighlight lang="sh">
mkdir build
cd build
</syntaxhighlight>
5. '''Run Emscripten CMake Commands:'''
<syntaxhighlight lang="sh">
emcmake cmake ..
emmake make
</syntaxhighlight>
6. '''Deploy and Run Locally:'''
<syntaxhighlight lang="sh">
emrun --no_browser --port 8080 .
</syntaxhighlight>
7. '''Optional Cleanup and Repeat Steps if Necessary:'''
<syntaxhighlight lang="sh">
rm -rf *
emcmake cmake ..
emmake make
</syntaxhighlight>
8. '''Install the Build:'''
<syntaxhighlight lang="sh">
cmake --install .
sudo cmake --install .
</syntaxhighlight>
=== Notes ===
* Ensure that the Emscripten environment is correctly activated before starting the build process.
* Regularly clean the build directory to maintain a clean build environment.
055b6a12ee90709c9c28d4c96b78e804343c3cba
1734
1733
2024-07-21T00:37:21Z
Vrtnis
21
wikitext
text/x-wiki
== Suggestions to Build MuJoCo WASM with Release 3.1.6 ==
Note: These instructions relate to building MuJoCo as WASM from the DeepMind MuJoCo source on GitHub. If you want to build an existing WASM port of MuJoCo 2.3.1, check out [[MuJoCo_WASM]].
1. '''Clone the MuJoCo Repository:'''
<syntaxhighlight lang="sh">
git clone --branch 3.1.6 https://github.com/deepmind/mujoco.git
cd mujoco
</syntaxhighlight>
2. '''Clone the Emscripten SDK Repository:'''
<syntaxhighlight lang="sh">
git clone https://github.com/emscripten-core/emsdk.git
cd emsdk
</syntaxhighlight>
3. '''Install and Activate Emscripten:'''
<syntaxhighlight lang="sh">
./emsdk install latest
./emsdk activate latest
source ./emsdk_env.sh
cd ..
</syntaxhighlight>
Proceed with the next steps once you've made suggested changes to your codebase to allow for a WASM build. You can see the changes here as a reference https://github.com/google-deepmind/mujoco/compare/main...vrtnis:mujoco:release-3.1.6
You probably have to disable several features of 3.1.6 to allow for a successful WASM build.
For example in CMakeLists.txt, the options to build examples, simulate library, and tests for MuJoCo have been turned off by default (changed from ON to OFF). New configurations and target properties for building MuJoCo with Emscripten have been added. This includes defining source files, checking their existence, and setting specific properties and options for building WebAssembly (.wasm) and HTML output. Some target link options have been adjusted, including removing the lodepng library from the target link list.
Also, in mjxmacro.h you have to add explicit casts to size_t in the calculations of the key_mpos and key_mquat array sizes, ensuring correct memory allocation and preventing potential integer overflow issues.
4. '''Prepare the Build Environment:'''
<syntaxhighlight lang="sh">
mkdir build
cd build
</syntaxhighlight>
5. '''Run Emscripten CMake Commands:'''
<syntaxhighlight lang="sh">
emcmake cmake ..
emmake make
</syntaxhighlight>
6. '''Deploy and Run Locally:'''
<syntaxhighlight lang="sh">
emrun --no_browser --port 8080 .
</syntaxhighlight>
7. '''Optional Cleanup and Repeat Steps if Necessary:'''
<syntaxhighlight lang="sh">
rm -rf *
emcmake cmake ..
emmake make
</syntaxhighlight>
8. '''Install the Build:'''
<syntaxhighlight lang="sh">
cmake --install .
sudo cmake --install .
</syntaxhighlight>
=== Notes ===
* Ensure that the Emscripten environment is correctly activated before starting the build process.
* Regularly clean the build directory to maintain a clean build environment.
7d5424b828be3f8791d4885d98244d526799988b
1735
1734
2024-07-21T00:39:25Z
Vrtnis
21
wikitext
text/x-wiki
== Suggestions to Build MuJoCo WASM with Release 3.1.6 ==
Note: These instructions relate to building MuJoCo as WASM from the DeepMind MuJoCo source on GitHub. If you want to build an existing WASM port of MuJoCo 2.3.1, check out [[MuJoCo_WASM]].
1. '''Clone the MuJoCo Repository:'''
<syntaxhighlight lang="sh">
git clone --branch 3.1.6 https://github.com/deepmind/mujoco.git
cd mujoco
</syntaxhighlight>
2. '''Clone the Emscripten SDK Repository:'''
<syntaxhighlight lang="sh">
git clone https://github.com/emscripten-core/emsdk.git
cd emsdk
</syntaxhighlight>
3. '''Install and Activate Emscripten:'''
<syntaxhighlight lang="sh">
./emsdk install latest
./emsdk activate latest
source ./emsdk_env.sh
cd ..
</syntaxhighlight>
Proceed with the next steps once you've made suggested changes to your codebase to allow for a WASM build. You can see the changes here as a reference https://github.com/google-deepmind/mujoco/compare/main...vrtnis:mujoco:release-3.1.6
You probably have to disable several features of 3.1.6 to allow for a successful WASM build.
For example in CMakeLists.txt, the options to build examples, simulate library, and tests for MuJoCo have been turned off by default (changed from ON to OFF). New configurations and target properties for building MuJoCo with Emscripten have been added. This includes defining source files, checking their existence, and setting specific properties and options for building WebAssembly (.wasm) and HTML output. Some target link options have been adjusted, including removing the lodepng library from the target link list.
Also, in mjxmacro.h you have to add explicit casts to size_t in the calculations of the key_mpos and key_mquat array sizes, ensuring correct memory allocation and preventing potential integer overflow issues.
Also, we'd suggest taking a look at https://github.com/stillonearth/MuJoCo-WASM/issues/1 (older but still relevant)
4. '''Prepare the Build Environment:'''
<syntaxhighlight lang="sh">
mkdir build
cd build
</syntaxhighlight>
5. '''Run Emscripten CMake Commands:'''
<syntaxhighlight lang="sh">
emcmake cmake ..
emmake make
</syntaxhighlight>
6. '''Deploy and Run Locally:'''
<syntaxhighlight lang="sh">
emrun --no_browser --port 8080 .
</syntaxhighlight>
7. '''Optional Cleanup and Repeat Steps if Necessary:'''
<syntaxhighlight lang="sh">
rm -rf *
emcmake cmake ..
emmake make
</syntaxhighlight>
8. '''Install the Build:'''
<syntaxhighlight lang="sh">
cmake --install .
sudo cmake --install .
</syntaxhighlight>
=== Notes ===
* Ensure that the Emscripten environment is correctly activated before starting the build process.
* Regularly clean the build directory to maintain a clean build environment.
88bfd2bb9f05e519a470e9ee9ca178398b27cbb2
1736
1735
2024-07-21T00:39:46Z
Vrtnis
21
wikitext
text/x-wiki
== Suggestions to Build MuJoCo WASM with Release 3.1.6 ==
Note: These instructions relate to building MuJoCo as WASM from the DeepMind MuJoCo source on GitHub. If you want to build an existing WASM port of MuJoCo 2.3.1, check out [[MuJoCo_WASM]].
1. '''Clone the MuJoCo Repository:'''
<syntaxhighlight lang="sh">
git clone --branch 3.1.6 https://github.com/deepmind/mujoco.git
cd mujoco
</syntaxhighlight>
2. '''Clone the Emscripten SDK Repository:'''
<syntaxhighlight lang="sh">
git clone https://github.com/emscripten-core/emsdk.git
cd emsdk
</syntaxhighlight>
3. '''Install and Activate Emscripten:'''
<syntaxhighlight lang="sh">
./emsdk install latest
./emsdk activate latest
source ./emsdk_env.sh
cd ..
</syntaxhighlight>
Proceed with the next steps once you've made suggested changes to your codebase to allow for a WASM build. You can see the changes here as a reference https://github.com/google-deepmind/mujoco/compare/main...vrtnis:mujoco:release-3.1.6
You probably have to disable several features of 3.1.6 to allow for a successful WASM build.
For example in CMakeLists.txt, the options to build examples, simulate library, and tests for MuJoCo have been turned off by default (changed from ON to OFF). New configurations and target properties for building MuJoCo with Emscripten have been added. This includes defining source files, checking their existence, and setting specific properties and options for building WebAssembly (.wasm) and HTML output. Some target link options have been adjusted, including removing the lodepng library from the target link list.
Also, in mjxmacro.h you have to add explicit casts to size_t in the calculations of the key_mpos and key_mquat array sizes, ensuring correct memory allocation and preventing potential integer overflow issues.
We'd suggest taking a look at https://github.com/stillonearth/MuJoCo-WASM/issues/1 (older but still relevant)
4. '''Prepare the Build Environment:'''
<syntaxhighlight lang="sh">
mkdir build
cd build
</syntaxhighlight>
5. '''Run Emscripten CMake Commands:'''
<syntaxhighlight lang="sh">
emcmake cmake ..
emmake make
</syntaxhighlight>
6. '''Deploy and Run Locally:'''
<syntaxhighlight lang="sh">
emrun --no_browser --port 8080 .
</syntaxhighlight>
7. '''Optional Cleanup and Repeat Steps if Necessary:'''
<syntaxhighlight lang="sh">
rm -rf *
emcmake cmake ..
emmake make
</syntaxhighlight>
8. '''Install the Build:'''
<syntaxhighlight lang="sh">
cmake --install .
sudo cmake --install .
</syntaxhighlight>
=== Notes ===
* Ensure that the Emscripten environment is correctly activated before starting the build process.
* Regularly clean the build directory to maintain a clean build environment.
900afaf152c5fb7cd51bc318f0025143bd8e90e5
1737
1736
2024-07-21T00:44:09Z
Vrtnis
21
wikitext
text/x-wiki
== Suggestions to Build MuJoCo WASM with Release 3.1.6 ==
Note: These instructions relate to building MuJoCo as WASM from the DeepMind MuJoCo source on GitHub. If you want to build an existing WASM port of MuJoCo 2.3.1, check out [[MuJoCo_WASM]].
1. '''Clone the MuJoCo Repository:'''
<syntaxhighlight lang="sh">
git clone --branch 3.1.6 https://github.com/deepmind/mujoco.git
cd mujoco
</syntaxhighlight>
2. '''Clone the Emscripten SDK Repository:'''
<syntaxhighlight lang="sh">
git clone https://github.com/emscripten-core/emsdk.git
cd emsdk
</syntaxhighlight>
3. '''Install and Activate Emscripten:'''
<syntaxhighlight lang="sh">
./emsdk install latest
./emsdk activate latest
source ./emsdk_env.sh
cd ..
</syntaxhighlight>
Proceed with the next steps once you've made suggested changes to your codebase to allow for a WASM build. You can see the changes here as a reference https://github.com/google-deepmind/mujoco/compare/main...vrtnis:mujoco:release-3.1.6
You probably have to disable several features of 3.1.6 to allow for a successful WASM build.
For example in CMakeLists.txt, the options to build examples, simulate library, and tests for MuJoCo have been turned off by default (changed from ON to OFF). New configurations and target properties for building MuJoCo with Emscripten have been added. This includes defining source files, checking their existence, and setting specific properties and options for building WebAssembly (.wasm) and HTML output. Some target link options have been adjusted, including removing the lodepng library from the target link list.
Also, in mjxmacro.h you have to add explicit casts to size_t in the calculations of the key_mpos and key_mquat array sizes, ensuring correct memory allocation and preventing potential integer overflow issues.
We'd suggest taking a look at https://github.com/stillonearth/MuJoCo-WASM/issues/1 (older 2.3.1 build but still relevant)
4. '''Prepare the Build Environment:'''
<syntaxhighlight lang="sh">
mkdir build
cd build
</syntaxhighlight>
5. '''Run Emscripten CMake Commands:'''
<syntaxhighlight lang="sh">
emcmake cmake ..
emmake make
</syntaxhighlight>
6. '''Deploy and Run Locally:'''
<syntaxhighlight lang="sh">
emrun --no_browser --port 8080 .
</syntaxhighlight>
7. '''Optional Cleanup and Repeat Steps if Necessary:'''
<syntaxhighlight lang="sh">
rm -rf *
emcmake cmake ..
emmake make
</syntaxhighlight>
8. '''Install the Build:'''
<syntaxhighlight lang="sh">
cmake --install .
sudo cmake --install .
</syntaxhighlight>
=== Notes ===
* Ensure that the Emscripten environment is correctly activated before starting the build process.
* Regularly clean the build directory to maintain a clean build environment.
a0244e7460286d4295fd24b30585718dec15f63a
File:Screenshot18-10-40.png
6
399
1738
2024-07-21T01:16:12Z
Vrtnis
21
Terminal output of wasm build
wikitext
text/x-wiki
== Summary ==
Terminal output of wasm build
7030c3f83e987c7211e1dc67562e01f6b1fa6e82
Mujoco WASM Build From Source
0
398
1739
1737
2024-07-21T01:18:43Z
Vrtnis
21
/*Add terminal output image*/
wikitext
text/x-wiki
== Suggestions to Build MuJoCo WASM with Release 3.1.6 ==
Note: These instructions relate to building MuJoCo as WASM from the DeepMind MuJoCo source on GitHub. If you want to build an existing WASM port of MuJoCo 2.3.1, check out [[MuJoCo_WASM]].
1. '''Clone the MuJoCo Repository:'''
<syntaxhighlight lang="sh">
git clone --branch 3.1.6 https://github.com/deepmind/mujoco.git
cd mujoco
</syntaxhighlight>
2. '''Clone the Emscripten SDK Repository:'''
<syntaxhighlight lang="sh">
git clone https://github.com/emscripten-core/emsdk.git
cd emsdk
</syntaxhighlight>
3. '''Install and Activate Emscripten:'''
<syntaxhighlight lang="sh">
./emsdk install latest
./emsdk activate latest
source ./emsdk_env.sh
cd ..
</syntaxhighlight>
Proceed with the next steps once you've made suggested changes to your codebase to allow for a WASM build. You can see the changes here as a reference https://github.com/google-deepmind/mujoco/compare/main...vrtnis:mujoco:release-3.1.6
You probably have to disable several features of 3.1.6 to allow for a successful WASM build.
For example in CMakeLists.txt, the options to build examples, simulate library, and tests for MuJoCo have been turned off by default (changed from ON to OFF). New configurations and target properties for building MuJoCo with Emscripten have been added. This includes defining source files, checking their existence, and setting specific properties and options for building WebAssembly (.wasm) and HTML output. Some target link options have been adjusted, including removing the lodepng library from the target link list.
Also, in mjxmacro.h you have to add explicit casts to size_t in the calculations of the key_mpos and key_mquat array sizes, ensuring correct memory allocation and preventing potential integer overflow issues.
We'd suggest taking a look at https://github.com/stillonearth/MuJoCo-WASM/issues/1 (older 2.3.1 build but still relevant)
4. '''Prepare the Build Environment:'''
<syntaxhighlight lang="sh">
mkdir build
cd build
</syntaxhighlight>
5. '''Run Emscripten CMake Commands:'''
<syntaxhighlight lang="sh">
emcmake cmake ..
emmake make
</syntaxhighlight>
[[File:Screenshot18-10-40.png|WASM Build Terminal Output]]
6. '''Deploy and Run Locally:'''
<syntaxhighlight lang="sh">
emrun --no_browser --port 8080 .
</syntaxhighlight>
7. '''Optional Cleanup and Repeat Steps if Necessary:'''
<syntaxhighlight lang="sh">
rm -rf *
emcmake cmake ..
emmake make
</syntaxhighlight>
=== Notes ===
* Ensure that the Emscripten environment is correctly activated before starting the build process.
* Regularly clean the build directory to maintain a clean build environment.
361a859a694e9c897ddf922f7c6999c72e286ae8
1741
1739
2024-07-21T01:21:06Z
Vrtnis
21
wikitext
text/x-wiki
== Suggestions to Build MuJoCo WASM with Release 3.1.6 ==
Note: These instructions relate to building MuJoCo as WASM from the DeepMind MuJoCo source on GitHub. If you want to build an existing WASM port of MuJoCo 2.3.1, check out [[MuJoCo_WASM]].
1. '''Clone the MuJoCo Repository:'''
<syntaxhighlight lang="sh">
git clone --branch 3.1.6 https://github.com/deepmind/mujoco.git
cd mujoco
</syntaxhighlight>
2. '''Clone the Emscripten SDK Repository:'''
<syntaxhighlight lang="sh">
git clone https://github.com/emscripten-core/emsdk.git
cd emsdk
</syntaxhighlight>
3. '''Install and Activate Emscripten:'''
<syntaxhighlight lang="sh">
./emsdk install latest
./emsdk activate latest
source ./emsdk_env.sh
cd ..
</syntaxhighlight>
Proceed with the next steps once you've made suggested changes to your codebase to allow for a WASM build. You can see the changes here as a reference https://github.com/google-deepmind/mujoco/compare/main...vrtnis:mujoco:release-3.1.6
You probably have to disable several features of 3.1.6 to allow for a successful WASM build.
For example in CMakeLists.txt, the options to build examples, simulate library, and tests for MuJoCo have been turned off by default (changed from ON to OFF). New configurations and target properties for building MuJoCo with Emscripten have been added. This includes defining source files, checking their existence, and setting specific properties and options for building WebAssembly (.wasm) and HTML output. Some target link options have been adjusted, including removing the lodepng library from the target link list.
Also, in mjxmacro.h you have to add explicit casts to size_t in the calculations of the key_mpos and key_mquat array sizes, ensuring correct memory allocation and preventing potential integer overflow issues.
We'd suggest taking a look at https://github.com/stillonearth/MuJoCo-WASM/issues/1 (older 2.3.1 build but still relevant)
4. '''Prepare the Build Environment:'''
<syntaxhighlight lang="sh">
mkdir build
cd build
</syntaxhighlight>
5. '''Run Emscripten CMake Commands:'''
<syntaxhighlight lang="sh">
emcmake cmake ..
emmake make
</syntaxhighlight>
[[File:Screenshot18-10-40.png|WASM Build Terminal Output]]
6. '''Deploy and Run Locally:'''
<syntaxhighlight lang="sh">
emrun --no_browser --port 8080 .
</syntaxhighlight>
[[File:Screenshot18-10-07.png|800px|WASM Build Running In Browser]]
7. '''Optional Cleanup and Repeat Steps if Necessary:'''
<syntaxhighlight lang="sh">
rm -rf *
emcmake cmake ..
emmake make
</syntaxhighlight>
=== Notes ===
* Ensure that the Emscripten environment is correctly activated before starting the build process.
* Regularly clean the build directory to maintain a clean build environment.
0551da456f4d57cfdeaaac83e06b4aad3703e4b9
1742
1741
2024-07-21T01:22:08Z
Vrtnis
21
wikitext
text/x-wiki
== Suggestions to Build MuJoCo WASM with Release 3.1.6 ==
Note: These instructions relate to building MuJoCo as WASM from the DeepMind MuJoCo source on GitHub. If you want to build an existing WASM port of MuJoCo 2.3.1, check out [[MuJoCo_WASM]].
1. '''Clone the MuJoCo Repository:'''
<syntaxhighlight lang="sh">
git clone --branch 3.1.6 https://github.com/deepmind/mujoco.git
cd mujoco
</syntaxhighlight>
2. '''Clone the Emscripten SDK Repository:'''
<syntaxhighlight lang="sh">
git clone https://github.com/emscripten-core/emsdk.git
cd emsdk
</syntaxhighlight>
3. '''Install and Activate Emscripten:'''
<syntaxhighlight lang="sh">
./emsdk install latest
./emsdk activate latest
source ./emsdk_env.sh
cd ..
</syntaxhighlight>
Proceed with the next steps once you've made suggested changes to your codebase to allow for a WASM build. You can see the changes here as a reference https://github.com/google-deepmind/mujoco/compare/main...vrtnis:mujoco:release-3.1.6
As of this writing, you'd need to disable several features of 3.1.6 to allow for a successful WASM build.
For example in CMakeLists.txt, the options to build examples, simulate library, and tests for MuJoCo have been turned off by default (changed from ON to OFF). New configurations and target properties for building MuJoCo with Emscripten have been added. This includes defining source files, checking their existence, and setting specific properties and options for building WebAssembly (.wasm) and HTML output. Some target link options have been adjusted, including removing the lodepng library from the target link list.
Also, in mjxmacro.h you have to add explicit casts to size_t in the calculations of the key_mpos and key_mquat array sizes, ensuring correct memory allocation and preventing potential integer overflow issues.
We'd suggest taking a look at https://github.com/stillonearth/MuJoCo-WASM/issues/1 (older 2.3.1 build but still relevant)
4. '''Prepare the Build Environment:'''
<syntaxhighlight lang="sh">
mkdir build
cd build
</syntaxhighlight>
5. '''Run Emscripten CMake Commands:'''
<syntaxhighlight lang="sh">
emcmake cmake ..
emmake make
</syntaxhighlight>
[[File:Screenshot18-10-40.png|WASM Build Terminal Output]]
6. '''Deploy and Run Locally:'''
<syntaxhighlight lang="sh">
emrun --no_browser --port 8080 .
</syntaxhighlight>
[[File:Screenshot18-10-07.png|800px|WASM Build Running In Browser]]
7. '''Optional Cleanup and Repeat Steps if Necessary:'''
<syntaxhighlight lang="sh">
rm -rf *
emcmake cmake ..
emmake make
</syntaxhighlight>
=== Notes ===
* Ensure that the Emscripten environment is correctly activated before starting the build process.
* Regularly clean the build directory to maintain a clean build environment.
b5322b45282b1f395c3703d4ac5a5b1e14e148eb
1743
1742
2024-07-21T01:23:05Z
Vrtnis
21
wikitext
text/x-wiki
==== Suggestions to Build MuJoCo WASM with Release 3.1.6 ====
Note: These instructions relate to a WASM build of MuJoCo from the DeepMind MuJoCo source on GitHub. If you want to build using an existing WASM port of MuJoCo 2.3.1, check out [[MuJoCo_WASM]].
1. '''Clone the MuJoCo Repository:'''
<syntaxhighlight lang="sh">
git clone --branch 3.1.6 https://github.com/deepmind/mujoco.git
cd mujoco
</syntaxhighlight>
2. '''Clone the Emscripten SDK Repository:'''
<syntaxhighlight lang="sh">
git clone https://github.com/emscripten-core/emsdk.git
cd emsdk
</syntaxhighlight>
3. '''Install and Activate Emscripten:'''
<syntaxhighlight lang="sh">
./emsdk install latest
./emsdk activate latest
source ./emsdk_env.sh
cd ..
</syntaxhighlight>
Proceed with the next steps once you've made suggested changes to your codebase to allow for a WASM build. You can see the changes here as a reference https://github.com/google-deepmind/mujoco/compare/main...vrtnis:mujoco:release-3.1.6
As of this writing, you'd need to disable several features of 3.1.6 to allow for a successful WASM build.
For example in CMakeLists.txt, the options to build examples, simulate library, and tests for MuJoCo have been turned off by default (changed from ON to OFF). New configurations and target properties for building MuJoCo with Emscripten have been added. This includes defining source files, checking their existence, and setting specific properties and options for building WebAssembly (.wasm) and HTML output. Some target link options have been adjusted, including removing the lodepng library from the target link list.
Also, for example in mjxmacro.h, you have to add explicit casts to size_t in the calculations of the key_mpos and key_mquat array sizes, ensuring correct memory allocation and preventing potential integer overflow issues.
We'd suggest taking a look at https://github.com/stillonearth/MuJoCo-WASM/issues/1 (older 2.3.1 build but still relevant)
4. '''Prepare the Build Environment:'''
<syntaxhighlight lang="sh">
mkdir build
cd build
</syntaxhighlight>
5. '''Run Emscripten CMake Commands:'''
<syntaxhighlight lang="sh">
emcmake cmake ..
emmake make
</syntaxhighlight>
[[File:Screenshot18-10-40.png|WASM Build Terminal Output]]
6. '''Deploy and Run Locally:'''
<syntaxhighlight lang="sh">
emrun --no_browser --port 8080 .
</syntaxhighlight>
[[File:Screenshot18-10-07.png|800px|WASM Build Running In Browser]]
7. '''Optional Cleanup and Repeat Steps if Necessary:'''
<syntaxhighlight lang="sh">
rm -rf *
emcmake cmake ..
emmake make
</syntaxhighlight>
=== Notes ===
* Ensure that the Emscripten environment is correctly activated before starting the build process.
* Regularly clean the build directory to maintain a clean build environment.
4ed767b7fe954c2617e6b8275b54706fe811367b
1745
1743
2024-07-21T01:29:43Z
Vrtnis
21
/*Add bin folder screenshot*/
wikitext
text/x-wiki
==== Suggestions to Build MuJoCo WASM with Release 3.1.6 ====
Note: These instructions relate to a WASM build of MuJoCo from the DeepMind MuJoCo source on GitHub. If you want to build using an existing WASM port of MuJoCo 2.3.1, check out [[MuJoCo_WASM]].
1. '''Clone the MuJoCo Repository:'''
<syntaxhighlight lang="sh">
git clone --branch 3.1.6 https://github.com/deepmind/mujoco.git
cd mujoco
</syntaxhighlight>
2. '''Clone the Emscripten SDK Repository:'''
<syntaxhighlight lang="sh">
git clone https://github.com/emscripten-core/emsdk.git
cd emsdk
</syntaxhighlight>
3. '''Install and Activate Emscripten:'''
<syntaxhighlight lang="sh">
./emsdk install latest
./emsdk activate latest
source ./emsdk_env.sh
cd ..
</syntaxhighlight>
Proceed with the next steps once you've made suggested changes to your codebase to allow for a WASM build. You can see the changes here as a reference https://github.com/google-deepmind/mujoco/compare/main...vrtnis:mujoco:release-3.1.6
As of this writing, you'd need to disable several features of 3.1.6 to allow for a successful WASM build.
For example in CMakeLists.txt, the options to build examples, simulate library, and tests for MuJoCo have been turned off by default (changed from ON to OFF). New configurations and target properties for building MuJoCo with Emscripten have been added. This includes defining source files, checking their existence, and setting specific properties and options for building WebAssembly (.wasm) and HTML output. Some target link options have been adjusted, including removing the lodepng library from the target link list.
Also, for example in mjxmacro.h, you have to add explicit casts to size_t in the calculations of the key_mpos and key_mquat array sizes, ensuring correct memory allocation and preventing potential integer overflow issues.
We'd suggest taking a look at https://github.com/stillonearth/MuJoCo-WASM/issues/1 (older 2.3.1 build but still relevant)
4. '''Prepare the Build Environment:'''
<syntaxhighlight lang="sh">
mkdir build
cd build
</syntaxhighlight>
5. '''Run Emscripten CMake Commands:'''
<syntaxhighlight lang="sh">
emcmake cmake ..
emmake make
</syntaxhighlight>
[[File:Screenshot18-10-40.png|WASM Build Terminal Output]]
6. '''Deploy and Run Locally:'''
<syntaxhighlight lang="sh">
emrun --no_browser --port 8080 .
</syntaxhighlight>
[[File:Screenshot18-10-07.png|800px|WASM Build Running In Browser]]
[[File:Screenshot18-24-43.png|800px|WASM Build bin folder]]
7. '''Optional Cleanup and Repeat Steps if Necessary:'''
<syntaxhighlight lang="sh">
rm -rf *
emcmake cmake ..
emmake make
</syntaxhighlight>
=== Notes ===
* Ensure that the Emscripten environment is correctly activated before starting the build process.
* Regularly clean the build directory to maintain a clean build environment.
c1473f438dc492997ec1ef2832a820ea8dbcad5b
1746
1745
2024-07-21T02:09:09Z
Vrtnis
21
wikitext
text/x-wiki
==== Suggestions to Build MuJoCo WASM with Release 3.1.6 ====
Note: These instructions relate to a WASM build of MuJoCo from the DeepMind MuJoCo source on GitHub. If you want to build using an existing WASM port of MuJoCo 2.3.1, check out [[MuJoCo_WASM]].
1. '''Clone the MuJoCo Repository:'''
<syntaxhighlight lang="sh">
git clone --branch 3.1.6 https://github.com/deepmind/mujoco.git
cd mujoco
</syntaxhighlight>
2. '''Clone the Emscripten SDK Repository:'''
<syntaxhighlight lang="sh">
git clone https://github.com/emscripten-core/emsdk.git
cd emsdk
</syntaxhighlight>
3. '''Install and Activate Emscripten:'''
<syntaxhighlight lang="sh">
./emsdk install latest
./emsdk activate latest
source ./emsdk_env.sh
cd ..
</syntaxhighlight>
Proceed with the next steps once you've made suggested changes to your codebase to allow for a WASM build. You can see the changes here as a reference https://github.com/google-deepmind/mujoco/compare/3.1.6...vrtnis:mujoco:release-3.1.6
As of this writing, you'd need to disable several features of 3.1.6 to allow for a successful WASM build.
For example in CMakeLists.txt, the options to build examples, simulate library, and tests for MuJoCo have been turned off by default (changed from ON to OFF). New configurations and target properties for building MuJoCo with Emscripten have been added. This includes defining source files, checking their existence, and setting specific properties and options for building WebAssembly (.wasm) and HTML output. Some target link options have been adjusted, including removing the lodepng library from the target link list.
Also, for example in mjxmacro.h, you have to add explicit casts to size_t in the calculations of the key_mpos and key_mquat array sizes, ensuring correct memory allocation and preventing potential integer overflow issues.
We'd suggest taking a look at https://github.com/stillonearth/MuJoCo-WASM/issues/1 (older 2.3.1 build but still relevant)
4. '''Prepare the Build Environment:'''
<syntaxhighlight lang="sh">
mkdir build
cd build
</syntaxhighlight>
5. '''Run Emscripten CMake Commands:'''
<syntaxhighlight lang="sh">
emcmake cmake ..
emmake make
</syntaxhighlight>
[[File:Screenshot18-10-40.png|WASM Build Terminal Output]]
6. '''Deploy and Run Locally:'''
<syntaxhighlight lang="sh">
emrun --no_browser --port 8080 .
</syntaxhighlight>
[[File:Screenshot18-10-07.png|800px|WASM Build Running In Browser]]
[[File:Screenshot18-24-43.png|800px|WASM Build bin folder]]
7. '''Optional Cleanup and Repeat Steps if Necessary:'''
<syntaxhighlight lang="sh">
rm -rf *
emcmake cmake ..
emmake make
</syntaxhighlight>
=== Notes ===
* Ensure that the Emscripten environment is correctly activated before starting the build process.
* Regularly clean the build directory to maintain a clean build environment.
04527b74dc62eee60f06658cc0ba0b83ab21d53b
1748
1746
2024-07-22T21:05:16Z
Vrtnis
21
wikitext
text/x-wiki
==== Suggestions to Build MuJoCo WASM with Release 3.1.6 ====
Note: These instructions relate to a WASM build of MuJoCo from the DeepMind MuJoCo source on GitHub. If you want to build using an existing WASM port of MuJoCo 2.3.1, check out [[MuJoCo_WASM]].
1. '''Clone the MuJoCo Repository:'''
<syntaxhighlight lang="sh">
git clone --branch 3.1.6 https://github.com/deepmind/mujoco.git
cd mujoco
</syntaxhighlight>
2. '''Clone the Emscripten SDK Repository:'''
<syntaxhighlight lang="sh">
git clone https://github.com/emscripten-core/emsdk.git
cd emsdk
</syntaxhighlight>
3. '''Install and Activate Emscripten:'''
<syntaxhighlight lang="sh">
./emsdk install latest
./emsdk activate latest
source ./emsdk_env.sh
cd ..
</syntaxhighlight>
Proceed with the next steps once you've made suggested changes to your codebase to allow for a WASM build. You can see the changes here as a reference https://github.com/google-deepmind/mujoco/compare/3.1.6...vrtnis:mujoco:release-3.1.6
You could also run git show 5c4b86b b35d837 85f7539 6f173ea 4142aa5 7e2fb2f b0e4341 a4b173e ad49419 21e4e6c c18e6f5 836bbd9 534e43e 0b75f5f > changes.txt
As of this writing, you'd need to disable several features of 3.1.6 to allow for a successful WASM build.
For example in CMakeLists.txt, the options to build examples, simulate library, and tests for MuJoCo have been turned off by default (changed from ON to OFF). New configurations and target properties for building MuJoCo with Emscripten have been added. This includes defining source files, checking their existence, and setting specific properties and options for building WebAssembly (.wasm) and HTML output. Some target link options have been adjusted, including removing the lodepng library from the target link list.
Also, for example in mjxmacro.h, you have to add explicit casts to size_t in the calculations of the key_mpos and key_mquat array sizes, ensuring correct memory allocation and preventing potential integer overflow issues.
We'd suggest taking a look at https://github.com/stillonearth/MuJoCo-WASM/issues/1 (older 2.3.1 build but still relevant)
4. '''Prepare the Build Environment:'''
<syntaxhighlight lang="sh">
mkdir build
cd build
</syntaxhighlight>
5. '''Run Emscripten CMake Commands:'''
<syntaxhighlight lang="sh">
emcmake cmake ..
emmake make
</syntaxhighlight>
[[File:Screenshot18-10-40.png|WASM Build Terminal Output]]
6. '''Deploy and Run Locally:'''
<syntaxhighlight lang="sh">
emrun --no_browser --port 8080 .
</syntaxhighlight>
[[File:Screenshot18-10-07.png|800px|WASM Build Running In Browser]]
[[File:Screenshot18-24-43.png|800px|WASM Build bin folder]]
7. '''Optional Cleanup and Repeat Steps if Necessary:'''
<syntaxhighlight lang="sh">
rm -rf *
emcmake cmake ..
emmake make
</syntaxhighlight>
=== Notes ===
* Ensure that the Emscripten environment is correctly activated before starting the build process.
* Regularly clean the build directory to maintain a clean build environment.
f04c7bafd9bf4bc147b8692bca884d8d71341d68
1749
1748
2024-07-22T21:06:12Z
Vrtnis
21
wikitext
text/x-wiki
==== Suggestions to Build MuJoCo WASM with Release 3.1.6 ====
Note: These instructions relate to a WASM build of MuJoCo from the DeepMind MuJoCo source on GitHub. If you want to build using an existing WASM port of MuJoCo 2.3.1, check out [[MuJoCo_WASM]].
1. '''Clone the MuJoCo Repository:'''
<syntaxhighlight lang="sh">
git clone --branch 3.1.6 https://github.com/deepmind/mujoco.git
cd mujoco
</syntaxhighlight>
2. '''Clone the Emscripten SDK Repository:'''
<syntaxhighlight lang="sh">
git clone https://github.com/emscripten-core/emsdk.git
cd emsdk
</syntaxhighlight>
3. '''Install and Activate Emscripten:'''
<syntaxhighlight lang="sh">
./emsdk install latest
./emsdk activate latest
source ./emsdk_env.sh
cd ..
</syntaxhighlight>
Proceed with the next steps once you've made suggested changes to your codebase to allow for a WASM build. You can see the changes here as a reference https://github.com/google-deepmind/mujoco/compare/3.1.6...vrtnis:mujoco:release-3.1.6
You could also run git show 5c4b86b b35d837 85f7539 6f173ea 4142aa5 7e2fb2f b0e4341 a4b173e ad49419 21e4e6c c18e6f5 836bbd9 534e43e 0b75f5f > changes.txt
As of this writing, you'd need to disable several features of 3.1.6 to allow for a successful WASM build.
For example in CMakeLists.txt, the options to build examples, simulate library, and tests for MuJoCo have been turned off by default (changed from ON to OFF). New configurations and target properties for building MuJoCo with Emscripten have been added. This includes defining source files, checking their existence, and setting specific properties and options for building WebAssembly (.wasm) and HTML output. Some target link options have been adjusted, including removing the lodepng library from the target link list.
Also, for example in mjxmacro.h, you have to add explicit casts to size_t in the calculations of the key_mpos and key_mquat array sizes, ensuring correct memory allocation and preventing potential integer overflow issues.
We'd suggest taking a look at https://github.com/stillonearth/MuJoCo-WASM/issues/1 (older 2.3.1 build but still relevant)
4. '''Prepare the Build Environment:'''
<syntaxhighlight lang="sh">
mkdir build
cd build
</syntaxhighlight>
5. '''Run Emscripten CMake Commands:'''
<syntaxhighlight lang="sh">
emcmake cmake ..
emmake make
</syntaxhighlight>
[[File:Screenshot18-10-40.png|WASM Build Terminal Output]]
6. '''Deploy and Run Locally:'''
<syntaxhighlight lang="sh">
emrun --no_browser --port 8080 .
</syntaxhighlight>
[[File:Screenshot18-10-07.png|800px|WASM Build Running In Browser]]
[[File:Screenshot18-24-43.png|800px|WASM Build bin folder]]
7. '''Optional Cleanup and Repeat Steps if Necessary:'''
<syntaxhighlight lang="sh">
rm -rf *
emcmake cmake ..
emmake make
</syntaxhighlight>
=== Notes ===
* Ensure that the Emscripten environment is correctly activated before starting the build process.
* Regularly clean the build directory to maintain a clean build environment.
39fae87acf8b07878a094751cec403db020899da
1750
1749
2024-07-22T21:26:54Z
Vrtnis
21
wikitext
text/x-wiki
==== Suggestions to Build MuJoCo WASM with Release 3.1.6 ====
Note: These instructions relate to a WASM build of MuJoCo from the DeepMind MuJoCo source on GitHub. If you want to build using an existing WASM port of MuJoCo 2.3.1, check out [[MuJoCo_WASM]].
1. '''Clone the MuJoCo Repository:'''
<syntaxhighlight lang="sh">
git clone --branch 3.1.6 https://github.com/deepmind/mujoco.git
cd mujoco
</syntaxhighlight>
2. '''Clone the Emscripten SDK Repository:'''
<syntaxhighlight lang="sh">
git clone https://github.com/emscripten-core/emsdk.git
cd emsdk
</syntaxhighlight>
3. '''Install and Activate Emscripten:'''
<syntaxhighlight lang="sh">
./emsdk install latest
./emsdk activate latest
source ./emsdk_env.sh
cd ..
</syntaxhighlight>
Proceed with the next steps once you've made suggested changes to your codebase to allow for a WASM build. You can see the changes here as a reference https://github.com/google-deepmind/mujoco/compare/3.1.6...vrtnis:mujoco:release-3.1.6
You could also run git show to see the specific changes that have been made to the original MuJoCo 3.1.6 files, e.g. git show 5c4b86b > changes.txt
As of this writing, you'd need to disable several features of 3.1.6 to allow for a successful WASM build.
For example in CMakeLists.txt, the options to build examples, simulate library, and tests for MuJoCo have been turned off by default (changed from ON to OFF). New configurations and target properties for building MuJoCo with Emscripten have been added. This includes defining source files, checking their existence, and setting specific properties and options for building WebAssembly (.wasm) and HTML output. Some target link options have been adjusted, including removing the lodepng library from the target link list.
Also, for example in mjxmacro.h, you have to add explicit casts to size_t in the calculations of the key_mpos and key_mquat array sizes, ensuring correct memory allocation and preventing potential integer overflow issues.
We'd suggest taking a look at https://github.com/stillonearth/MuJoCo-WASM/issues/1 (older 2.3.1 build but still relevant)
4. '''Prepare the Build Environment:'''
<syntaxhighlight lang="sh">
mkdir build
cd build
</syntaxhighlight>
5. '''Run Emscripten CMake Commands:'''
<syntaxhighlight lang="sh">
emcmake cmake ..
emmake make
</syntaxhighlight>
[[File:Screenshot18-10-40.png|WASM Build Terminal Output]]
6. '''Deploy and Run Locally:'''
<syntaxhighlight lang="sh">
emrun --no_browser --port 8080 .
</syntaxhighlight>
[[File:Screenshot18-10-07.png|800px|WASM Build Running In Browser]]
[[File:Screenshot18-24-43.png|800px|WASM Build bin folder]]
7. '''Optional Cleanup and Repeat Steps if Necessary:'''
<syntaxhighlight lang="sh">
rm -rf *
emcmake cmake ..
emmake make
</syntaxhighlight>
=== Notes ===
* Ensure that the Emscripten environment is correctly activated before starting the build process.
* Regularly clean the build directory to maintain a clean build environment.
394fa39d59c2fdbbba6a0fc400b2c6f5cbf4dbe1
1751
1750
2024-07-22T21:32:34Z
Vrtnis
21
wikitext
text/x-wiki
==== Suggestions to Build MuJoCo WASM with Release 3.1.6 ====
Note: These instructions relate to a WASM build of MuJoCo from the DeepMind MuJoCo source on GitHub. If you want to build using an existing WASM port of MuJoCo 2.3.1, check out [[MuJoCo_WASM]].
1. '''Clone the MuJoCo Repository:'''
<syntaxhighlight lang="sh">
git clone --branch 3.1.6 https://github.com/deepmind/mujoco.git
cd mujoco
</syntaxhighlight>
2. '''Clone the Emscripten SDK Repository:'''
<syntaxhighlight lang="sh">
git clone https://github.com/emscripten-core/emsdk.git
cd emsdk
</syntaxhighlight>
3. '''Install and Activate Emscripten:'''
<syntaxhighlight lang="sh">
./emsdk install latest
./emsdk activate latest
source ./emsdk_env.sh
cd ..
</syntaxhighlight>
Proceed with the next steps once you've made suggested changes to your codebase to allow for a WASM build. You can see the changes here as a reference https://github.com/google-deepmind/mujoco/compare/3.1.6...vrtnis:mujoco:release-3.1.6
You could also run git show to see the specific changes that have been made to the original MuJoCo 3.1.6 files, e.g. git show 5c4b86b > changes.txt
As of this writing, you'd need to disable several features of 3.1.6 to allow for a successful WASM build.
* For example in CMakeLists.txt, the options to build examples, simulate library, and tests for MuJoCo have been turned off by default (changed from ON to OFF). New configurations and target properties for building MuJoCo with Emscripten have been added. This includes defining source files, checking their existence, and setting specific properties and options for building WebAssembly (.wasm) and HTML output. Some target link options have been adjusted, including removing the lodepng library from the target link list.
*
* In src/user/user_objects.cc comment out the line that includes lodepng.h. Replace the bodies of the mjCHField::LoadPNG and mjCTexture::LoadPNG functions with a single return statement.
*
Also, for example in mjxmacro.h, you have to add explicit casts to size_t in the calculations of the key_mpos and key_mquat array sizes, ensuring correct memory allocation and preventing potential integer overflow issues.
We'd suggest taking a look at https://github.com/stillonearth/MuJoCo-WASM/issues/1 (older 2.3.1 build but still relevant)
4. '''Prepare the Build Environment:'''
<syntaxhighlight lang="sh">
mkdir build
cd build
</syntaxhighlight>
5. '''Run Emscripten CMake Commands:'''
<syntaxhighlight lang="sh">
emcmake cmake ..
emmake make
</syntaxhighlight>
[[File:Screenshot18-10-40.png|WASM Build Terminal Output]]
6. '''Deploy and Run Locally:'''
<syntaxhighlight lang="sh">
emrun --no_browser --port 8080 .
</syntaxhighlight>
[[File:Screenshot18-10-07.png|800px|WASM Build Running In Browser]]
[[File:Screenshot18-24-43.png|800px|WASM Build bin folder]]
7. '''Optional Cleanup and Repeat Steps if Necessary:'''
<syntaxhighlight lang="sh">
rm -rf *
emcmake cmake ..
emmake make
</syntaxhighlight>
=== Notes ===
* Ensure that the Emscripten environment is correctly activated before starting the build process.
* Regularly clean the build directory to maintain a clean build environment.
9d8af48b64c7d5b97fc029c3e1cdec0ceb406929
1752
1751
2024-07-22T21:33:56Z
Vrtnis
21
wikitext
text/x-wiki
==== Suggestions to Build MuJoCo WASM with Release 3.1.6 ====
Note: These instructions relate to a WASM build of MuJoCo from the DeepMind MuJoCo source on GitHub. If you want to build using an existing WASM port of MuJoCo 2.3.1, check out [[MuJoCo_WASM]].
1. '''Clone the MuJoCo Repository:'''
<syntaxhighlight lang="sh">
git clone --branch 3.1.6 https://github.com/deepmind/mujoco.git
cd mujoco
</syntaxhighlight>
2. '''Clone the Emscripten SDK Repository:'''
<syntaxhighlight lang="sh">
git clone https://github.com/emscripten-core/emsdk.git
cd emsdk
</syntaxhighlight>
3. '''Install and Activate Emscripten:'''
<syntaxhighlight lang="sh">
./emsdk install latest
./emsdk activate latest
source ./emsdk_env.sh
cd ..
</syntaxhighlight>
Proceed with the next steps once you've made suggested changes to your codebase to allow for a WASM build. You can see the changes here as a reference https://github.com/google-deepmind/mujoco/compare/3.1.6...vrtnis:mujoco:release-3.1.6
You could also run git show to see the specific changes that have been made to the original MuJoCo 3.1.6 files, e.g. git show 5c4b86b > changes.txt
As of this writing, you'd need to disable several features of 3.1.6 to allow for a successful WASM build.
* For example in CMakeLists.txt, the options to build examples, simulate library, and tests for MuJoCo have been turned off by default (changed from ON to OFF). New configurations and target properties for building MuJoCo with Emscripten have been added. This includes defining source files, checking their existence, and setting specific properties and options for building WebAssembly (.wasm) and HTML output. Some target link options have been adjusted, including removing the lodepng library from the target link list.
*
* In src/user/user_objects.cc comment out the line that includes lodepng.h. Replace the bodies of the mjCHField::LoadPNG and mjCTexture::LoadPNG functions with a single return statement.
*
* In src/engine/engine_util_errmem.c update the preprocessor condition in the mju_writeLog function by replacing __STDC_VERSION_TIME_H__ with __EMSCRIPTEN__ in the #if directive. The line should now include __EMSCRIPTEN__ in the condition.
Also, for example in mjxmacro.h, you have to add explicit casts to size_t in the calculations of the key_mpos and key_mquat array sizes, ensuring correct memory allocation and preventing potential integer overflow issues.
We'd suggest taking a look at https://github.com/stillonearth/MuJoCo-WASM/issues/1 (older 2.3.1 build but still relevant)
4. '''Prepare the Build Environment:'''
<syntaxhighlight lang="sh">
mkdir build
cd build
</syntaxhighlight>
5. '''Run Emscripten CMake Commands:'''
<syntaxhighlight lang="sh">
emcmake cmake ..
emmake make
</syntaxhighlight>
[[File:Screenshot18-10-40.png|WASM Build Terminal Output]]
6. '''Deploy and Run Locally:'''
<syntaxhighlight lang="sh">
emrun --no_browser --port 8080 .
</syntaxhighlight>
[[File:Screenshot18-10-07.png|800px|WASM Build Running In Browser]]
[[File:Screenshot18-24-43.png|800px|WASM Build bin folder]]
7. '''Optional Cleanup and Repeat Steps if Necessary:'''
<syntaxhighlight lang="sh">
rm -rf *
emcmake cmake ..
emmake make
</syntaxhighlight>
=== Notes ===
* Ensure that the Emscripten environment is correctly activated before starting the build process.
* Regularly clean the build directory to maintain a clean build environment.
55099cde94f01b807e8d29bb5e33f9b1cd8ccba5
1753
1752
2024-07-22T21:35:22Z
Vrtnis
21
wikitext
text/x-wiki
==== Suggestions to Build MuJoCo WASM with Release 3.1.6 ====
Note: These instructions relate to a WASM build of MuJoCo from the DeepMind MuJoCo source on GitHub. If you want to build using an existing WASM port of MuJoCo 2.3.1, check out [[MuJoCo_WASM]].
1. '''Clone the MuJoCo Repository:'''
<syntaxhighlight lang="sh">
git clone --branch 3.1.6 https://github.com/deepmind/mujoco.git
cd mujoco
</syntaxhighlight>
2. '''Clone the Emscripten SDK Repository:'''
<syntaxhighlight lang="sh">
git clone https://github.com/emscripten-core/emsdk.git
cd emsdk
</syntaxhighlight>
3. '''Install and Activate Emscripten:'''
<syntaxhighlight lang="sh">
./emsdk install latest
./emsdk activate latest
source ./emsdk_env.sh
cd ..
</syntaxhighlight>
Proceed with the next steps once you've made suggested changes to your codebase to allow for a WASM build. You can see the changes here as a reference https://github.com/google-deepmind/mujoco/compare/3.1.6...vrtnis:mujoco:release-3.1.6
You could also run git show to see the specific changes that have been made to the original MuJoCo 3.1.6 files, e.g. git show 5c4b86b > changes.txt
As of this writing, you'd need to disable several features of 3.1.6 to allow for a successful WASM build.
* For example in CMakeLists.txt, the options to build examples, simulate library, and tests for MuJoCo have been turned off by default (changed from ON to OFF). New configurations and target properties for building MuJoCo with Emscripten have been added. This includes defining source files, checking their existence, and setting specific properties and options for building WebAssembly (.wasm) and HTML output. Some target link options have been adjusted, including removing the lodepng library from the target link list.
*
* In src/user/user_objects.cc comment out the line that includes lodepng.h. Replace the bodies of the mjCHField::LoadPNG and mjCTexture::LoadPNG functions with a single return statement.
*
* In src/engine/engine_util_errmem.c update the preprocessor condition in the mju_writeLog function by replacing __STDC_VERSION_TIME_H__ with __EMSCRIPTEN__ in the #if directive. The line should now include __EMSCRIPTEN__ in the condition.
*
* In src/engine/engine_crossplatform.h add a conditional block specifically for Emscripten within the preprocessor directive in the engine_crossplatform.h file. Inside this block, include the sort_r.h header and define the mjQUICKSORT and quicksortfunc macros appropriately. This will separate the handling for Apple, Emscripten, and other platforms.
Also, for example in mjxmacro.h, you have to add explicit casts to size_t in the calculations of the key_mpos and key_mquat array sizes, ensuring correct memory allocation and preventing potential integer overflow issues.
We'd suggest taking a look at https://github.com/stillonearth/MuJoCo-WASM/issues/1 (older 2.3.1 build but still relevant)
4. '''Prepare the Build Environment:'''
<syntaxhighlight lang="sh">
mkdir build
cd build
</syntaxhighlight>
5. '''Run Emscripten CMake Commands:'''
<syntaxhighlight lang="sh">
emcmake cmake ..
emmake make
</syntaxhighlight>
[[File:Screenshot18-10-40.png|WASM Build Terminal Output]]
6. '''Deploy and Run Locally:'''
<syntaxhighlight lang="sh">
emrun --no_browser --port 8080 .
</syntaxhighlight>
[[File:Screenshot18-10-07.png|800px|WASM Build Running In Browser]]
[[File:Screenshot18-24-43.png|800px|WASM Build bin folder]]
7. '''Optional Cleanup and Repeat Steps if Necessary:'''
<syntaxhighlight lang="sh">
rm -rf *
emcmake cmake ..
emmake make
</syntaxhighlight>
=== Notes ===
* Ensure that the Emscripten environment is correctly activated before starting the build process.
* Regularly clean the build directory to maintain a clean build environment.
776773235a1eb70a91503afc1e91cefbb44ae116
1754
1753
2024-07-22T21:36:19Z
Vrtnis
21
wikitext
text/x-wiki
==== Suggestions to Build MuJoCo WASM with Release 3.1.6 ====
Note: These instructions relate to a WASM build of MuJoCo from the DeepMind MuJoCo source on GitHub. If you want to build using an existing WASM port of MuJoCo 2.3.1, check out [[MuJoCo_WASM]].
1. '''Clone the MuJoCo Repository:'''
<syntaxhighlight lang="sh">
git clone --branch 3.1.6 https://github.com/deepmind/mujoco.git
cd mujoco
</syntaxhighlight>
2. '''Clone the Emscripten SDK Repository:'''
<syntaxhighlight lang="sh">
git clone https://github.com/emscripten-core/emsdk.git
cd emsdk
</syntaxhighlight>
3. '''Install and Activate Emscripten:'''
<syntaxhighlight lang="sh">
./emsdk install latest
./emsdk activate latest
source ./emsdk_env.sh
cd ..
</syntaxhighlight>
Proceed with the next steps once you've made suggested changes to your codebase to allow for a WASM build. You can see the changes here as a reference https://github.com/google-deepmind/mujoco/compare/3.1.6...vrtnis:mujoco:release-3.1.6
You could also run git show to see the specific changes that have been made to the original MuJoCo 3.1.6 files, e.g. git show 5c4b86b > changes.txt
==== Specific ideas for changes ====
As of this writing, you'd need to disable several features of 3.1.6 to allow for a successful WASM build.
* For example in CMakeLists.txt, the options to build examples, simulate library, and tests for MuJoCo have been turned off by default (changed from ON to OFF). New configurations and target properties for building MuJoCo with Emscripten have been added. This includes defining source files, checking their existence, and setting specific properties and options for building WebAssembly (.wasm) and HTML output. Some target link options have been adjusted, including removing the lodepng library from the target link list.
*
* In src/user/user_objects.cc comment out the line that includes lodepng.h. Replace the bodies of the mjCHField::LoadPNG and mjCTexture::LoadPNG functions with a single return statement.
*
* In src/engine/engine_util_errmem.c update the preprocessor condition in the mju_writeLog function by replacing __STDC_VERSION_TIME_H__ with __EMSCRIPTEN__ in the #if directive. The line should now include __EMSCRIPTEN__ in the condition.
*
* In src/engine/engine_crossplatform.h add a conditional block specifically for Emscripten within the preprocessor directive in the engine_crossplatform.h file. Inside this block, include the sort_r.h header and define the mjQUICKSORT and quicksortfunc macros appropriately. This will separate the handling for Apple, Emscripten, and other platforms.
Also, for example in mjxmacro.h, you have to add explicit casts to size_t in the calculations of the key_mpos and key_mquat array sizes, ensuring correct memory allocation and preventing potential integer overflow issues.
We'd suggest taking a look at https://github.com/stillonearth/MuJoCo-WASM/issues/1 (older 2.3.1 build but still relevant)
4. '''Prepare the Build Environment:'''
<syntaxhighlight lang="sh">
mkdir build
cd build
</syntaxhighlight>
5. '''Run Emscripten CMake Commands:'''
<syntaxhighlight lang="sh">
emcmake cmake ..
emmake make
</syntaxhighlight>
[[File:Screenshot18-10-40.png|WASM Build Terminal Output]]
6. '''Deploy and Run Locally:'''
<syntaxhighlight lang="sh">
emrun --no_browser --port 8080 .
</syntaxhighlight>
[[File:Screenshot18-10-07.png|800px|WASM Build Running In Browser]]
[[File:Screenshot18-24-43.png|800px|WASM Build bin folder]]
7. '''Optional Cleanup and Repeat Steps if Necessary:'''
<syntaxhighlight lang="sh">
rm -rf *
emcmake cmake ..
emmake make
</syntaxhighlight>
=== Notes ===
* Ensure that the Emscripten environment is correctly activated before starting the build process.
* Regularly clean the build directory to maintain a clean build environment.
35deb54a2dddb4c4fe1a0ac44e3d52eedec1efe2
1755
1754
2024-07-22T21:37:43Z
Vrtnis
21
wikitext
text/x-wiki
==== Suggestions to Build MuJoCo WASM with Release 3.1.6 ====
Note: These instructions relate to a WASM build of MuJoCo from the DeepMind MuJoCo source on GitHub. If you want to build using an existing WASM port of MuJoCo 2.3.1, check out [[MuJoCo_WASM]].
1. '''Clone the MuJoCo Repository:'''
<syntaxhighlight lang="sh">
git clone --branch 3.1.6 https://github.com/deepmind/mujoco.git
cd mujoco
</syntaxhighlight>
2. '''Clone the Emscripten SDK Repository:'''
<syntaxhighlight lang="sh">
git clone https://github.com/emscripten-core/emsdk.git
cd emsdk
</syntaxhighlight>
3. '''Install and Activate Emscripten:'''
<syntaxhighlight lang="sh">
./emsdk install latest
./emsdk activate latest
source ./emsdk_env.sh
cd ..
</syntaxhighlight>
Proceed with the next steps once you've made suggested changes to your codebase to allow for a WASM build. You can see the changes here as a reference https://github.com/google-deepmind/mujoco/compare/3.1.6...vrtnis:mujoco:release-3.1.6
You could also run git show to see the specific changes that have been made to the original MuJoCo 3.1.6 files, e.g. git show 5c4b86b > changes.txt
==== Specific ideas for changes ====
As of this writing, you'd need to disable several features of 3.1.6 to allow for a successful WASM build.
* For example in CMakeLists.txt, the options to build examples, simulate library, and tests for MuJoCo have been turned off by default (changed from ON to OFF). New configurations and target properties for building MuJoCo with Emscripten have been added. This includes defining source files, checking their existence, and setting specific properties and options for building WebAssembly (.wasm) and HTML output. Some target link options have been adjusted, including removing the lodepng library from the target link list.
*
* In src/user/user_objects.cc comment out the line that includes lodepng.h. Replace the bodies of the mjCHField::LoadPNG and mjCTexture::LoadPNG functions with a single return statement.
*
* In src/engine/engine_util_errmem.c update the preprocessor condition in the mju_writeLog function by replacing __STDC_VERSION_TIME_H__ with __EMSCRIPTEN__ in the #if directive. The line should now include __EMSCRIPTEN__ in the condition.
*
* In src/engine/engine_crossplatform.h add a conditional block specifically for Emscripten within the preprocessor directive in the engine_crossplatform.h file. Inside this block, include the sort_r.h header and define the mjQUICKSORT and quicksortfunc macros appropriately. This will separate the handling for Apple, Emscripten, and other platforms.
*
* In the cmake/MujocoOptions.cmake file, remove the -Wno-int-in-bool-context compiler warning flag from the list of warnings.
Also, for example in mjxmacro.h, you have to add explicit casts to size_t in the calculations of the key_mpos and key_mquat array sizes, ensuring correct memory allocation and preventing potential integer overflow issues.
We'd suggest taking a look at https://github.com/stillonearth/MuJoCo-WASM/issues/1 (older 2.3.1 build but still relevant)
4. '''Prepare the Build Environment:'''
<syntaxhighlight lang="sh">
mkdir build
cd build
</syntaxhighlight>
5. '''Run Emscripten CMake Commands:'''
<syntaxhighlight lang="sh">
emcmake cmake ..
emmake make
</syntaxhighlight>
[[File:Screenshot18-10-40.png|WASM Build Terminal Output]]
6. '''Deploy and Run Locally:'''
<syntaxhighlight lang="sh">
emrun --no_browser --port 8080 .
</syntaxhighlight>
[[File:Screenshot18-10-07.png|800px|WASM Build Running In Browser]]
[[File:Screenshot18-24-43.png|800px|WASM Build bin folder]]
7. '''Optional Cleanup and Repeat Steps if Necessary:'''
<syntaxhighlight lang="sh">
rm -rf *
emcmake cmake ..
emmake make
</syntaxhighlight>
=== Notes ===
* Ensure that the Emscripten environment is correctly activated before starting the build process.
* Regularly clean the build directory to maintain a clean build environment.
b2df887061079878f8bd4bcb429d72b61b672953
1756
1755
2024-07-22T21:39:08Z
Vrtnis
21
wikitext
text/x-wiki
==== Suggestions to Build MuJoCo WASM with Release 3.1.6 ====
Note: These instructions relate to a WASM build of MuJoCo from the DeepMind MuJoCo source on GitHub. If you want to build using an existing WASM port of MuJoCo 2.3.1, check out [[MuJoCo_WASM]].
1. '''Clone the MuJoCo Repository:'''
<syntaxhighlight lang="sh">
git clone --branch 3.1.6 https://github.com/deepmind/mujoco.git
cd mujoco
</syntaxhighlight>
2. '''Clone the Emscripten SDK Repository:'''
<syntaxhighlight lang="sh">
git clone https://github.com/emscripten-core/emsdk.git
cd emsdk
</syntaxhighlight>
3. '''Install and Activate Emscripten:'''
<syntaxhighlight lang="sh">
./emsdk install latest
./emsdk activate latest
source ./emsdk_env.sh
cd ..
</syntaxhighlight>
Proceed with the next steps once you've made suggested changes to your codebase to allow for a WASM build. You can see the changes here as a reference https://github.com/google-deepmind/mujoco/compare/3.1.6...vrtnis:mujoco:release-3.1.6
You could also run git show to see the specific changes that have been made to the original MuJoCo 3.1.6 files, e.g. git show 5c4b86b > changes.txt
==== Specific ideas for changes ====
As of this writing, you'd need to disable several features of 3.1.6 to allow for a successful WASM build.
* For example in CMakeLists.txt, the options to build examples, simulate library, and tests for MuJoCo have been turned off by default (changed from ON to OFF). New configurations and target properties for building MuJoCo with Emscripten have been added. This includes defining source files, checking their existence, and setting specific properties and options for building WebAssembly (.wasm) and HTML output. Some target link options have been adjusted, including removing the lodepng library from the target link list.
*
* In src/user/user_objects.cc comment out the line that includes lodepng.h. Replace the bodies of the mjCHField::LoadPNG and mjCTexture::LoadPNG functions with a single return statement.
*
* In src/engine/engine_util_errmem.c update the preprocessor condition in the mju_writeLog function by replacing __STDC_VERSION_TIME_H__ with __EMSCRIPTEN__ in the #if directive. The line should now include __EMSCRIPTEN__ in the condition.
*
* In src/engine/engine_crossplatform.h add a conditional block specifically for Emscripten within the preprocessor directive in the engine_crossplatform.h file. Inside this block, include the sort_r.h header and define the mjQUICKSORT and quicksortfunc macros appropriately. This will separate the handling for Apple, Emscripten, and other platforms.
*
* In the cmake/MujocoOptions.cmake file, remove the -Wno-int-in-bool-context compiler warning flag from the list of warnings.
*
* In the CMakeLists.txt file, make the following changes:
Change the default values of the options to disable the building of examples, simulate library, tests, and Python utility libraries by setting them to OFF:
Set MUJOCO_BUILD_EXAMPLES to OFF.
Set MUJOCO_BUILD_SIMULATE to OFF.
Set MUJOCO_BUILD_TESTS to OFF.
Set MUJOCO_TEST_PYTHON_UTIL to OFF.
Remove the lodepng library from the target_link_libraries list for the mujoco target.
Also, for example in mjxmacro.h, you have to add explicit casts to size_t in the calculations of the key_mpos and key_mquat array sizes, ensuring correct memory allocation and preventing potential integer overflow issues.
We'd suggest taking a look at https://github.com/stillonearth/MuJoCo-WASM/issues/1 (older 2.3.1 build but still relevant)
4. '''Prepare the Build Environment:'''
<syntaxhighlight lang="sh">
mkdir build
cd build
</syntaxhighlight>
5. '''Run Emscripten CMake Commands:'''
<syntaxhighlight lang="sh">
emcmake cmake ..
emmake make
</syntaxhighlight>
[[File:Screenshot18-10-40.png|WASM Build Terminal Output]]
6. '''Deploy and Run Locally:'''
<syntaxhighlight lang="sh">
emrun --no_browser --port 8080 .
</syntaxhighlight>
[[File:Screenshot18-10-07.png|800px|WASM Build Running In Browser]]
[[File:Screenshot18-24-43.png|800px|WASM Build bin folder]]
7. '''Optional Cleanup and Repeat Steps if Necessary:'''
<syntaxhighlight lang="sh">
rm -rf *
emcmake cmake ..
emmake make
</syntaxhighlight>
=== Notes ===
* Ensure that the Emscripten environment is correctly activated before starting the build process.
* Regularly clean the build directory to maintain a clean build environment.
6d3e1c65f1322f6abcd4cb6e68cb975fbac6b3b2
1759
1756
2024-07-22T21:43:58Z
Vrtnis
21
wikitext
text/x-wiki
==== Suggestions to Build MuJoCo WASM with Release 3.1.6 ====
Note: These instructions relate to a WASM build of MuJoCo from the DeepMind MuJoCo source on GitHub. If you want to build using an existing WASM port of MuJoCo 2.3.1, check out [[MuJoCo_WASM]].
1. '''Clone the MuJoCo Repository:'''
<syntaxhighlight lang="sh">
git clone --branch 3.1.6 https://github.com/deepmind/mujoco.git
cd mujoco
</syntaxhighlight>
2. '''Clone the Emscripten SDK Repository:'''
<syntaxhighlight lang="sh">
git clone https://github.com/emscripten-core/emsdk.git
cd emsdk
</syntaxhighlight>
3. '''Install and Activate Emscripten:'''
<syntaxhighlight lang="sh">
./emsdk install latest
./emsdk activate latest
source ./emsdk_env.sh
cd ..
</syntaxhighlight>
Proceed with the next steps once you've made suggested changes to your codebase to allow for a WASM build. You can see the changes here as a reference https://github.com/google-deepmind/mujoco/compare/3.1.6...vrtnis:mujoco:release-3.1.6
You could also run git show to see the specific changes that have been made to the original MuJoCo 3.1.6 files, e.g. git show 5c4b86b > changes.txt
==== Specific ideas for changes ====
As of this writing, you'd need to disable several features of 3.1.6 to allow for a successful WASM build.
For example, in `CMakeLists.txt`, the options to build examples, the simulate library, and tests for MuJoCo have been turned off by default (changed from ON to OFF). New configurations and target properties for building MuJoCo with Emscripten have been added. This includes defining source files, checking their existence, and setting specific properties and options for building WebAssembly (.wasm) and HTML output. Some target link options have been adjusted, including removing the lodepng library from the target link list.
In `src/user/user_objects.cc`, comment out the line that includes `lodepng.h`. Replace the bodies of the `mjCHField::LoadPNG` and `mjCTexture::LoadPNG` functions with a single return statement.
In `src/engine/engine_util_errmem.c`, update the preprocessor condition in the `mju_writeLog` function by replacing `__STDC_VERSION_TIME_H__` with `__EMSCRIPTEN__` in the `#if` directive. The line should now include `__EMSCRIPTEN__` in the condition.
In `src/engine/engine_crossplatform.h`, add a conditional block specifically for Emscripten within the preprocessor directive. Inside this block, include the `sort_r.h` header and define the `mjQUICKSORT` and `quicksortfunc` macros appropriately. This will separate the handling for Apple, Emscripten, and other platforms.
In the `cmake/MujocoOptions.cmake` file, remove the `-Wno-int-in-bool-context` compiler warning flag from the list of warnings.
In the `CMakeLists.txt` file, make the following changes:
* Change the default values of the options to disable the building of examples, simulate library, tests, and Python utility libraries by setting them to OFF:
* Set `MUJOCO_BUILD_EXAMPLES` to OFF.
* Set `MUJOCO_BUILD_SIMULATE` to OFF.
* Set `MUJOCO_BUILD_TESTS` to OFF.
* Set `MUJOCO_TEST_PYTHON_UTIL` to OFF.
* Remove the `lodepng` library from the `target_link_libraries` list for the `mujoco` target.
Also, for example, in `mjxmacro.h`, add explicit casting to `size_t` for the calculations of `key_mpos` and `key_mquat` array sizes, ensuring correct memory allocation and preventing potential integer overflow issues.
We'd suggest taking a look at https://github.com/stillonearth/MuJoCo-WASM/issues/1 (older 2.3.1 build but still relevant)
4. '''Prepare the Build Environment:'''
<syntaxhighlight lang="sh">
mkdir build
cd build
</syntaxhighlight>
5. '''Run Emscripten CMake Commands:'''
<syntaxhighlight lang="sh">
emcmake cmake ..
emmake make
</syntaxhighlight>
[[File:Screenshot18-10-40.png|WASM Build Terminal Output]]
6. '''Deploy and Run Locally:'''
<syntaxhighlight lang="sh">
emrun --no_browser --port 8080 .
</syntaxhighlight>
[[File:Screenshot18-10-07.png|800px|WASM Build Running In Browser]]
[[File:Screenshot18-24-43.png|800px|WASM Build bin folder]]
7. '''Optional Cleanup and Repeat Steps if Necessary:'''
<syntaxhighlight lang="sh">
rm -rf *
emcmake cmake ..
emmake make
</syntaxhighlight>
=== Notes ===
* Ensure that the Emscripten environment is correctly activated before starting the build process.
* Regularly clean the build directory to maintain a clean build environment.
09e555c636883bc86d5a4407b89c693fe05b7baf
1760
1759
2024-07-22T21:44:13Z
Vrtnis
21
/* Specific ideas for changes */
wikitext
text/x-wiki
==== Suggestions to Build MuJoCo WASM with Release 3.1.6 ====
Note: These instructions relate to a WASM build of MuJoCo from the DeepMind MuJoCo source on GitHub. If you want to build using an existing WASM port of MuJoCo 2.3.1, check out [[MuJoCo_WASM]].
1. '''Clone the MuJoCo Repository:'''
<syntaxhighlight lang="sh">
git clone --branch 3.1.6 https://github.com/deepmind/mujoco.git
cd mujoco
</syntaxhighlight>
2. '''Clone the Emscripten SDK Repository:'''
<syntaxhighlight lang="sh">
git clone https://github.com/emscripten-core/emsdk.git
cd emsdk
</syntaxhighlight>
3. '''Install and Activate Emscripten:'''
<syntaxhighlight lang="sh">
./emsdk install latest
./emsdk activate latest
source ./emsdk_env.sh
cd ..
</syntaxhighlight>
Proceed with the next steps once you've made suggested changes to your codebase to allow for a WASM build. You can see the changes here as a reference https://github.com/google-deepmind/mujoco/compare/3.1.6...vrtnis:mujoco:release-3.1.6
You could also run git show to see the specific changes that have been made to the original MuJoCo 3.1.6 files, e.g. git show 5c4b86b > changes.txt
==== Specific ideas for changes ====
As of this writing, you'd need to disable several features of 3.1.6 to allow for a successful WASM build.
For example in CMakeLists.txt, the options to build examples, simulate library, and tests for MuJoCo have been turned off by default (changed from ON to OFF). New configurations and target properties for building MuJoCo with Emscripten have been added. This includes defining source files, checking their existence, and setting specific properties and options for building WebAssembly (.wasm) and HTML output. Some target link options have been adjusted, including removing the lodepng library from the target link list.
In `src/user/user_objects.cc`, comment out the line that includes `lodepng.h`. Replace the bodies of the `mjCHField::LoadPNG` and `mjCTexture::LoadPNG` functions with a single return statement.
In `src/engine/engine_util_errmem.c`, update the preprocessor condition in the `mju_writeLog` function by replacing `__STDC_VERSION_TIME_H__` with `__EMSCRIPTEN__` in the `#if` directive. The line should now include `__EMSCRIPTEN__` in the condition.
In `src/engine/engine_crossplatform.h`, add a conditional block specifically for Emscripten within the preprocessor directive. Inside this block, include the `sort_r.h` header and define the `mjQUICKSORT` and `quicksortfunc` macros appropriately. This will separate the handling for Apple, Emscripten, and other platforms.
In the `cmake/MujocoOptions.cmake` file, remove the `-Wno-int-in-bool-context` compiler warning flag from the list of warnings.
In the `CMakeLists.txt` file, make the following changes:
* Change the default values of the options to disable the building of examples, simulate library, tests, and Python utility libraries by setting them to OFF:
* Set `MUJOCO_BUILD_EXAMPLES` to OFF.
* Set `MUJOCO_BUILD_SIMULATE` to OFF.
* Set `MUJOCO_BUILD_TESTS` to OFF.
* Set `MUJOCO_TEST_PYTHON_UTIL` to OFF.
* Remove the `lodepng` library from the `target_link_libraries` list for the `mujoco` target.
Also, for example, in `mjxmacro.h`, add explicit casting to `size_t` for the calculations of `key_mpos` and `key_mquat` array sizes, ensuring correct memory allocation and preventing potential integer overflow issues.
We'd suggest taking a look at https://github.com/stillonearth/MuJoCo-WASM/issues/1 (older 2.3.1 build but still relevant)
4. '''Prepare the Build Environment:'''
<syntaxhighlight lang="sh">
mkdir build
cd build
</syntaxhighlight>
5. '''Run Emscripten CMake Commands:'''
<syntaxhighlight lang="sh">
emcmake cmake ..
emmake make
</syntaxhighlight>
[[File:Screenshot18-10-40.png|WASM Build Terminal Output]]
6. '''Deploy and Run Locally:'''
<syntaxhighlight lang="sh">
emrun --no_browser --port 8080 .
</syntaxhighlight>
[[File:Screenshot18-10-07.png|800px|WASM Build Running In Browser]]
[[File:Screenshot18-24-43.png|800px|WASM Build bin folder]]
7. '''Optional Cleanup and Repeat Steps if Necessary:'''
<syntaxhighlight lang="sh">
rm -rf *
emcmake cmake ..
emmake make
</syntaxhighlight>
=== Notes ===
* Ensure that the Emscripten environment is correctly activated before starting the build process.
* Regularly clean the build directory to maintain a clean build environment.
388c82fdc08cd136fe99f4c91b6aab897fc1a003
1765
1760
2024-07-22T22:29:29Z
Vrtnis
21
wikitext
text/x-wiki
==== Suggestions to Build MuJoCo WASM with Release 3.1.6 ====
Note: These instructions relate to a WASM build of MuJoCo from the DeepMind MuJoCo source on GitHub. If you want to build using an existing WASM port of MuJoCo 2.3.1, check out [[MuJoCo_WASM]].
1. '''Clone the MuJoCo Repository:'''
<syntaxhighlight lang="sh">
git clone --branch 3.1.6 https://github.com/deepmind/mujoco.git
cd mujoco
</syntaxhighlight>
2. '''Clone the Emscripten SDK Repository:'''
<syntaxhighlight lang="sh">
git clone https://github.com/emscripten-core/emsdk.git
cd emsdk
</syntaxhighlight>
3. '''Install and Activate Emscripten:'''
<syntaxhighlight lang="sh">
./emsdk install latest
./emsdk activate latest
source ./emsdk_env.sh
cd ..
</syntaxhighlight>
Proceed with the next steps once you've made the suggested changes to your codebase to allow for a WASM build. You can see the changes here for reference: https://github.com/google-deepmind/mujoco/compare/3.1.6...vrtnis:mujoco:release-3.1.6
You could also run git show to see the specific changes that have been made to the original MuJoCo 3.1.6 files, e.g. git show 5c4b86b > changes.txt
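For instance, to dump one or more of the fork's commits into a file for review (the hashes below are commits from the reference fork linked above; substitute whichever ones you want to inspect):
<syntaxhighlight lang="sh">
# Inspect a single commit from the reference fork and save the diff.
git show 5c4b86b > changes.txt

# Or collect several commits into one file at once.
git show 5c4b86b b35d837 85f7539 > changes.txt
</syntaxhighlight>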
==== Specific ideas for changes ====
As of this writing, you'd need to disable several features of 3.1.6 to allow for a successful WASM build.
For example in CMakeLists.txt, the options to build examples, simulate library, and tests for MuJoCo have been turned off by default (changed from ON to OFF). New configurations and target properties for building MuJoCo with Emscripten have been added. This includes defining source files, checking their existence, and setting specific properties and options for building WebAssembly (.wasm) and HTML output. Some target link options have been adjusted, including removing the lodepng library from the target link list.
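A minimal sketch of what those CMake-side changes can look like is below. It is written from the description above rather than copied from the fork, so treat the specifics (the `if(EMSCRIPTEN)` guard set by the emcmake toolchain, the `.html` output suffix, and the memory-growth link option) as illustrative choices, not the exact patch:
<syntaxhighlight lang="cmake">
# Sketch only: default the optional components to OFF so the WASM build
# only needs the core library.
option(MUJOCO_BUILD_EXAMPLES "Build MuJoCo example programs" OFF)
option(MUJOCO_BUILD_SIMULATE "Build the simulate library" OFF)
option(MUJOCO_BUILD_TESTS "Build MuJoCo tests" OFF)
option(MUJOCO_TEST_PYTHON_UTIL "Build and test Python utility libraries" OFF)

# Sketch only: extra settings when configuring through emcmake/emmake.
# EMSCRIPTEN is defined by the Emscripten CMake toolchain file.
if(EMSCRIPTEN)
  # Emit an .html driver page next to the .wasm/.js output for quick testing.
  set(CMAKE_EXECUTABLE_SUFFIX ".html")
  # Let the WASM heap grow at runtime instead of failing on larger models.
  target_link_options(mujoco PRIVATE "-sALLOW_MEMORY_GROWTH=1")
endif()
</syntaxhighlight>
The lodepng removal mentioned above then happens in the `target_link_libraries(mujoco ...)` call, alongside the source-level PNG stubs described in the list below.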
* In `src/user/user_objects.cc`, comment out the line that includes `lodepng.h`. Replace the bodies of the `mjCHField::LoadPNG` and `mjCTexture::LoadPNG` functions with a single return statement.
* In `src/engine/engine_util_errmem.c`, update the preprocessor condition in the `mju_writeLog` function by replacing `__STDC_VERSION_TIME_H__` with `__EMSCRIPTEN__` in the `#if` directive (see the sketch after this list).
* In `src/engine/engine_crossplatform.h`, add a conditional block specifically for Emscripten within the preprocessor directive. Inside this block, include the `sort_r.h` header and define the `mjQUICKSORT` and `quicksortfunc` macros appropriately, separating the handling for Apple, Emscripten, and other platforms (also sketched below).
* In the `cmake/MujocoOptions.cmake` file, remove the `-Wno-int-in-bool-context` compiler warning flag from the list of warnings.
In the `CMakeLists.txt` file, make the following changes:
* Change the default values of the options so that the examples, the simulate library, the tests, and the Python utility libraries are not built:
** Set `MUJOCO_BUILD_EXAMPLES` to OFF.
** Set `MUJOCO_BUILD_SIMULATE` to OFF.
** Set `MUJOCO_BUILD_TESTS` to OFF.
** Set `MUJOCO_TEST_PYTHON_UTIL` to OFF.
* Remove the `lodepng` library from the `target_link_libraries` list for the `mujoco` target.
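To make the two engine-level edits above more concrete, here is a rough sketch of the relevant preprocessor fragments. It is reconstructed from the prose description, not copied from the fork, so the surrounding code in the real files will look somewhat different; the `sort_r` signature shown assumes the GNU-style re-entrant comparator that `sort_r.h` provides.
<syntaxhighlight lang="c">
/* Sketch only -- two fragments, one per file described above. */

/* 1) src/engine/engine_util_errmem.c, inside mju_writeLog():
   swap the feature-test macro in the existing #if so the Emscripten
   build takes the portable time-formatting branch.
     before (schematic): #if ... defined(__STDC_VERSION_TIME_H__) ...
     after  (schematic): #if ... defined(__EMSCRIPTEN__) ...            */

/* 2) src/engine/engine_crossplatform.h: give Emscripten its own branch,
   built on the re-entrant sort_r() helper instead of the Apple or
   Windows/Linux qsort variants used elsewhere.                         */
#if defined(__APPLE__)
  /* ... existing Apple definitions, unchanged ... */
#elif defined(__EMSCRIPTEN__)
  #include "sort_r.h"
  #define mjQUICKSORT(buf, elnum, elsz, func, context) \
    sort_r(buf, elnum, elsz, func, context)
  #define quicksortfunc(name, context, el1, el2) \
    static int name(const void* el1, const void* el2, void* context)
#else
  /* ... existing Windows / Linux definitions, unchanged ... */
#endif
</syntaxhighlight>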
Also, for example, in `mjxmacro.h`, add explicit casting to `size_t` for the calculations of `key_mpos` and `key_mquat` array sizes, ensuring correct memory allocation and preventing potential integer overflow issues.
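The shape of that last change is simply to promote the size arithmetic to size_t before multiplying. The helpers below are hypothetical and only illustrate the idea (the real edit lives in the key_mpos / key_mquat entries of `mjxmacro.h` and the code that expands them), and they assume the usual nkey x 3*nmocap and nkey x 4*nmocap layouts for these fields:
<syntaxhighlight lang="c">
#include <stddef.h>

/* Hypothetical helpers, for illustration only: compute the keyframe mocap
   array sizes with the arithmetic promoted to size_t up front, which is the
   shape of the mjxmacro.h change described above. Assumed layouts:
   key_mpos is nkey x 3*nmocap, key_mquat is nkey x 4*nmocap. */
static size_t key_mpos_bytes(int nkey, int nmocap, size_t elem_size) {
  return (size_t)nkey * 3u * (size_t)nmocap * elem_size;
}

static size_t key_mquat_bytes(int nkey, int nmocap, size_t elem_size) {
  return (size_t)nkey * 4u * (size_t)nmocap * elem_size;
}
</syntaxhighlight>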
We'd suggest taking a look at https://github.com/stillonearth/MuJoCo-WASM/issues/1 (an older 2.3.1 build, but still relevant).
4. '''Prepare the Build Environment:'''
<syntaxhighlight lang="sh">
mkdir build
cd build
</syntaxhighlight>
5. '''Run Emscripten CMake Commands:'''
<syntaxhighlight lang="sh">
emcmake cmake ..
emmake make
</syntaxhighlight>
[[File:Screenshot18-10-40.png|WASM Build Terminal Output]]
6. '''Deploy and Run Locally:'''
<syntaxhighlight lang="sh">
emrun --no_browser --port 8080 .
</syntaxhighlight>
[[File:Screenshot18-10-07.png|800px|WASM Build Running In Browser]]
[[File:Screenshot18-24-43.png|800px|WASM Build bin folder]]
7. '''Optional Cleanup and Repeat Steps if Necessary:'''
<syntaxhighlight lang="sh">
rm -rf *
emcmake cmake ..
emmake make
</syntaxhighlight>
=== Notes ===
* Ensure that the Emscripten environment is correctly activated before starting the build process.
* Regularly clean the build directory to maintain a clean build environment.
abf1626a092569dca4a43dcb5bc49a0dce9e5993
File:Screenshot18-10-07.png
6
400
1740
2024-07-21T01:19:23Z
Vrtnis
21
Emscripten output
wikitext
text/x-wiki
== Summary ==
Emscripten output
63f1db9e5527b61b33cae69772dc3ca23d3fc5c7
File:Screenshot18-24-43.png
6
401
1744
2024-07-21T01:25:40Z
Vrtnis
21
bin folder after successful wasm build
wikitext
text/x-wiki
== Summary ==
bin folder after successful wasm build
675f4a9e7693866fbc7268c5e6356bd69fb3ab8e
K-Scale Lecture Circuit
0
299
1747
1722
2024-07-22T15:29:52Z
Ben
2
wikitext
text/x-wiki
{| class="wikitable"
|-
! Date
! Presenter
! Topic
|-
| 2024.07.21
| Wesley
| Consistency Models
|-
| 2024.07.20
| Nathan
| Flow Matching
|-
| 2024.07.17
| Dennis
| Diffusion Models
|-
| 2024.07.16
| Isaac
| Convolutional Neural Networks
|-
| 2024.07.15
| Allen
| RNNs, LSTMs, GRUs
|-
| 2024.06.28
| Kenji
| Principles of Power Electronics
|-
| 2024.06.27
| Paweł
| OpenVLA
|-
| 2024.06.26
| Ryan
| Introduction to KiCAD
|-
| 2024.06.24
| Dennis
| Live Streaming Protocols
|-
| 2024.06.21
| Nathan
| Quantization
|-
| 2024.06.20
| Timothy
| Diffusion
|-
| 2024.06.19
| Allen
| Neural Network Loss Functions
|-
| 2024.06.18
| Ben
| Neural network inference on the edge
|-
| 2024.06.17
| Matt
| CAD software deep dive
|-
| 2024.06.14
| Allen
| [https://github.com/karpathy/llm.c llm.c]
|-
| 2024.06.13
| Kenji
| Engineering principles of BLDCs
|-
| 2024.06.12
| Ryan
| 3-phase motors
|-
| 2024.06.11
| Vedant
| CAN Protocol
|-
| 2024.06.10
| Isaac
| [https://humanoids.wiki/w/Dennis%27_Speech_Project Speech Papers Round 2]
|-
| 2024.06.07
| Ben
| Quantization
|-
| 2024.06.06
| Tom
| Linux Raw
|-
| 2024.06.05
| Hugo
| Gaussian Splats
|-
| 2024.06.04
| Dennis
| [https://humanoids.wiki/w/Dennis%27_Speech_Project Speech representation learning papers]
|-
| 2024.06.03
| Paweł
| What I (want to) believe in
|-
| 2024.05.30
| Isaac
| VLMs
|-
| 2024.05.29
| Allen
| PPO
|}
[[Category: K-Scale]]
68f2c7d55fe9e8c254e8d16432a5c061e78d5e05
File:KawaiiStompySticker2.png
6
402
1757
2024-07-22T21:42:29Z
Ben
2
wikitext
text/x-wiki
KawaiiStompySticker2
69972620ddc1080c0f3e76ed4ce90ae798fa45e1
Stompy
0
2
1758
1664
2024-07-22T21:42:39Z
Ben
2
/* Artwork */
wikitext
text/x-wiki
[[File:Stompy.jpg|right|300px|thumb|Stompy standing up]]
{{infobox robot
| name = Stompy
| organization = [[K-Scale Labs]]
| cost = USD 10,000
}}
Stompy is an open-source humanoid robot developed by [[K-Scale Labs]]. Here are some relevant links:
* [[Stompy To-Do List]]
* [[Stompy Build Guide]]
* [[Gripper History]]
= Hardware =
This page details the hardware selections for humanoid robots, covering components such as actuators, cameras, compute units, PCBs and modules, batteries, displays, microphones, speakers, and wiring and connectors.
== Actuators ==
Actuators are the components that allow the robot to move and interact with its environment. They convert energy into mechanical motion. Common types used in humanoid robots include:
* Servo motors
* Stepper motors
* Linear actuators
== Cameras ==
Cameras are essential for visual processing, allowing the robot to perceive its surroundings. Important considerations include:
* Resolution and frame rate
* Field of view
* Depth sensing capabilities (3D cameras)
== Compute ==
The compute hardware handles the robot's processing requirements. This includes:
* Microprocessors and microcontrollers
* Single-board computers like Raspberry Pi or Nvidia Jetson
* Dedicated AI accelerators for machine learning tasks
== PCB and Modules ==
Printed Circuit Boards (PCBs) and the modules on them are the backbone of the robot's electronic system.
* Main control board
* Power management modules
* Sensor interfaces
* Communication modules (Wi-Fi, Bluetooth)
== Batteries ==
Batteries provide the necessary power to all robotic systems and are crucial for mobile autonomy. Selection factors include:
* Battery type (Li-Ion, NiMH, Lead-Acid)
* Capacity (measured in mAh or Ah)
* Voltage and energy density
* Safety features and durability
== Displays ==
Displays are used to present information such as system status, data, and interactive elements. Key features include:
* Size variations ranging from small to large panels
* Touchscreen capabilities
* High resolution displays
== Microphones ==
Microphones enable the robot to receive and process audio inputs, crucial for voice commands and auditory data. Factors to consider are:
* Sensitivity and noise cancellation
* Directionality (omnidirectional vs. unidirectional)
* Integration with voice recognition software
== Speakers ==
Speakers allow the robot to communicate audibly with its environment, essential for interaction and alerts. Considerations include:
* Power output and sound quality
* Size and mounting options
* Compatibility with audio processing hardware
== Wiring and Connectors ==
Proper wiring and connectors ensure reliable communication and power supply throughout the robot's components.
* Types of wires (gauge, shielding)
* Connectors (pin types, waterproofing)
* Cable management solutions
=== Conventions ===
The images below show our pin convention for the CAN bus when using various connectors.
<gallery>
Kscale db9 can bus convention.jpg
Kscale phoenix can bus convention.jpg
</gallery>
= Simulation =
For the latest simulation artifacts, see [https://kscale.dev/ the website].
= Artwork =
Here's some art of Stompy!
<gallery>
Stompy 1.png
Stompy 2.png
Stompy 3.png
Stompy 4.png
KawaiiStompySticker2.png
</gallery>
[[Category:Robots]]
[[Category:Open Source]]
[[Category:K-Scale]]
5bd60361744d0590c8a07ffa6bd67befe837d581
MuJoCo Sim Use Case Notes
0
403
1761
2024-07-22T21:52:10Z
Vrtnis
21
Created page with "= Notes from user calls == - how robotics simulators have changed processes and improved workflow. - using MuJoCo for advanced simulations and engine design performance, ment..."
wikitext
text/x-wiki
==== Notes from user calls ====
- how robotics simulators have changed processes and improved workflow.
- using MuJoCo for advanced simulations and engine design performance, mentioning issues with communicating these simulations to engineering and maintenance teams.
- open source simulation tools, such as creating a comprehensive simulation platform, exploring engine dynamics, and the ease of customization due to its open-source nature.
- example use case of MuJoCo providing self-serve exploration for maintenance teams, refining designs, developing predictive maintenance strategies, and reducing development costs.
- self-serve simulations work and their user-friendly design for maintenance personnel.
- customized KUKA robot and a visual inspection system using a cobot for automated documentation of engine parts.
- robotic polishing system
Source transcript at:
https://vrtnis.github.io/transcript-viewer/?jsonUrl=https://gist.githubusercontent.com/vrtnis/3a025a21cbe367368b01fe8b97d8d9d9/raw/1f2962a16f7b6439d36f8067c8f1657a72408682/pratt_1-1948273645.json
8b23643e23c746a1519e63b1fdf6678488081d9e
1762
1761
2024-07-22T21:52:29Z
Vrtnis
21
/* Notes from user calls = */
wikitext
text/x-wiki
==== Notes from user calls ====
- how robotics simulators have changed processes and improved workflow.
- using MuJoCo for advanced simulations and engine design performance, mentioning issues with communicating these simulations to engineering and maintenance teams.
- open source simulation tools, such as creating a comprehensive simulation platform, exploring engine dynamics, and the ease of customization due to its open-source nature.
- example use case of MuJoCo providing self-serve exploration for maintenance teams, refining designs, developing predictive maintenance strategies, and reducing development costs.
- self-serve simulations work and their user-friendly design for maintenance personnel.
- customized KUKA robot and a visual inspection system using a cobot for automated documentation of engine parts.
- robotic polishing system
Source transcript at:
https://vrtnis.github.io/transcript-viewer/?jsonUrl=https://gist.githubusercontent.com/vrtnis/3a025a21cbe367368b01fe8b97d8d9d9/raw/1f2962a16f7b6439d36f8067c8f1657a72408682/pratt_1-1948273645.json
c8d28c7ec53a33c7cc33325cd18d7efe6cc0563d
1763
1762
2024-07-22T21:52:43Z
Vrtnis
21
wikitext
text/x-wiki
==== Notes from user calls ====
- how robotics simulators have changed processes and improved workflow.
- using MuJoCo for advanced simulations and engine design performance, mentioning issues with communicating these simulations to engineering and maintenance teams.
- open source simulation tools, such as creating a comprehensive simulation platform, exploring engine dynamics, and the ease of customization due to its open-source nature.
- example use case of MuJoCo providing self-serve exploration for maintenance teams, refining designs, developing predictive maintenance strategies, and reducing development costs.
- self-serve simulations work and their user-friendly design for maintenance personnel.
- customized KUKA robot and a visual inspection system using a cobot for automated documentation of engine parts.
- robotic polishing system
Source transcript at:
https://vrtnis.github.io/transcript-viewer/?jsonUrl=https://gist.githubusercontent.com/vrtnis/3a025a21cbe367368b01fe8b97d8d9d9/raw/1f2962a16f7b6439d36f8067c8f1657a72408682/pratt_1-1948273645.json
1fe0f7d320c7520c7c114f349b19a2c37c31c3e5
1764
1763
2024-07-22T21:55:13Z
Vrtnis
21
wikitext
text/x-wiki
==== Notes from user calls ====
Work in progress...
- how robotics simulators have changed processes and improved workflow.
- using MuJoCo for advanced simulations and engine design performance, mentioning issues with communicating these simulations to engineering and maintenance teams.
- open source simulation tools, such as creating a comprehensive simulation platform, exploring engine dynamics, and the ease of customization due to its open-source nature.
- example use case of MuJoCo providing self-serve exploration for maintenance teams, refining designs, developing predictive maintenance strategies, and reducing development costs.
- self-serve simulations work and their user-friendly design for maintenance personnel.
- customized KUKA robot and a visual inspection system using a cobot for automated documentation of engine parts.
- robotic polishing system
Source transcript at:
https://vrtnis.github.io/transcript-viewer/?jsonUrl=https://gist.githubusercontent.com/vrtnis/3a025a21cbe367368b01fe8b97d8d9d9/raw/1f2962a16f7b6439d36f8067c8f1657a72408682/pratt_1-1948273645.json
dd32c242095403f3195056a7a388fcf0cad7df90
1767
1764
2024-07-22T22:52:34Z
Vrtnis
21
wikitext
text/x-wiki
==== Notes from user calls ====
Work in progress...
- how robotics simulators have changed processes and improved workflow.
- using MuJoCo for advanced simulations and engine design performance, mentioning issues with communicating these simulations to engineering and maintenance teams.
- open source simulation tools, such as creating a comprehensive simulation platform, exploring engine dynamics, and the ease of customization due to its open-source nature.
- example use case of MuJoCo providing self-serve exploration for maintenance teams, refining designs, developing predictive maintenance strategies, and reducing development costs.
- self-serve simulations work and their user-friendly design for maintenance personnel.
- customized KUKA robot and a visual inspection system using a cobot for automated documentation of engine parts.
- robotic polishing system
'''Source transcript at:
https://vrtnis.github.io/transcript-viewer/?jsonUrl=https://gist.githubusercontent.com/vrtnis/3a025a21cbe367368b01fe8b97d8d9d9/raw/1f2962a16f7b6439d36f8067c8f1657a72408682/pratt_1-1948273645.json'''
[[File:Screenshot15-50-46.png|900px]]
a8d22e1d994c3f123cb6574c8321dba4bc58e2f6
1768
1767
2024-07-22T22:52:54Z
Vrtnis
21
wikitext
text/x-wiki
==== Notes from user calls ====
''Work in progress...
''
- how robotics simulators have changed processes and improved workflow.
- using MuJoCo for advanced simulations and engine design performance, mentioning issues with communicating these simulations to engineering and maintenance teams.
- open source simulation tools, such as creating a comprehensive simulation platform, exploring engine dynamics, and the ease of customization due to its open-source nature.
- example use case of MuJoCo providing self-serve exploration for maintenance teams, refining designs, developing predictive maintenance strategies, and reducing development costs.
- self-serve simulations work and their user-friendly design for maintenance personnel.
- customized KUKA robot and a visual inspection system using a cobot for automated documentation of engine parts.
- robotic polishing system
'''Source transcript at:
https://vrtnis.github.io/transcript-viewer/?jsonUrl=https://gist.githubusercontent.com/vrtnis/3a025a21cbe367368b01fe8b97d8d9d9/raw/1f2962a16f7b6439d36f8067c8f1657a72408682/pratt_1-1948273645.json'''
[[File:Screenshot15-50-46.png|900px]]
42e1c8a590ab97806b660739fb62b5fed0626c70
1769
1768
2024-07-22T22:53:04Z
Vrtnis
21
wikitext
text/x-wiki
==== Notes from user calls ====
''Work in progress...''
- how robotics simulators have changed processes and improved workflow.
- using MuJoCo for advanced simulations and engine design performance, mentioning issues with communicating these simulations to engineering and maintenance teams.
- open source simulation tools, such as creating a comprehensive simulation platform, exploring engine dynamics, and the ease of customization due to its open-source nature.
- example use case of MuJoCo providing self-serve exploration for maintenance teams, refining designs, developing predictive maintenance strategies, and reducing development costs.
- self-serve simulations work and their user-friendly design for maintenance personnel.
- customized KUKA robot and a visual inspection system using a cobot for automated documentation of engine parts.
- robotic polishing system
'''Source transcript at:
https://vrtnis.github.io/transcript-viewer/?jsonUrl=https://gist.githubusercontent.com/vrtnis/3a025a21cbe367368b01fe8b97d8d9d9/raw/1f2962a16f7b6439d36f8067c8f1657a72408682/pratt_1-1948273645.json'''
[[File:Screenshot15-50-46.png|900px]]
598c2f0e534eb5c5bde1bfde426a69d103f97d07
1770
1769
2024-07-22T22:53:17Z
Vrtnis
21
wikitext
text/x-wiki
==== Notes from user calls ====
- how robotics simulators have changed processes and improved workflow.
- using MuJoCo for advanced simulations and engine design performance, mentioning issues with communicating these simulations to engineering and maintenance teams.
- open source simulation tools, such as creating a comprehensive simulation platform, exploring engine dynamics, and the ease of customization due to its open-source nature.
- example use case of MuJoCo providing self-serve exploration for maintenance teams, refining designs, developing predictive maintenance strategies, and reducing development costs.
- self-serve simulations work and their user-friendly design for maintenance personnel.
- customized KUKA robot and a visual inspection system using a cobot for automated documentation of engine parts.
- robotic polishing system
'''Source transcript at:
https://vrtnis.github.io/transcript-viewer/?jsonUrl=https://gist.githubusercontent.com/vrtnis/3a025a21cbe367368b01fe8b97d8d9d9/raw/1f2962a16f7b6439d36f8067c8f1657a72408682/pratt_1-1948273645.json'''
[[File:Screenshot15-50-46.png|900px]]
76fb32912827b6430784261856175f48238f1994
1771
1770
2024-07-22T22:53:45Z
Vrtnis
21
wikitext
text/x-wiki
==== Notes from user calls ====
- how robotics simulators have changed processes and improved workflow.
- using MuJoCo for advanced simulations and engine design performance, mentioning issues with communicating these simulations to engineering and maintenance teams.
- open source simulation tools, such as creating a comprehensive simulation platform, exploring engine dynamics, and the ease of customization due to its open-source nature.
- example use case of MuJoCo providing self-serve exploration for maintenance teams, refining designs, developing predictive maintenance strategies, and reducing development costs.
- self-serve simulations work and their user-friendly design for maintenance personnel.
- customized KUKA robot and a visual inspection system using a cobot for automated documentation of engine parts.
- robotic polishing system
'''Source transcript at:'''
https://vrtnis.github.io/transcript-viewer/?jsonUrl=https://gist.githubusercontent.com/vrtnis/3a025a21cbe367368b01fe8b97d8d9d9/raw/1f2962a16f7b6439d36f8067c8f1657a72408682/pratt_1-1948273645.json
[[File:Screenshot15-50-46.png|900px]]
516d6b4478a2db25923f0692151a52846531f5fc
File:Screenshot15-50-46.png
6
404
1766
2024-07-22T22:51:18Z
Vrtnis
21
Call transcript screenshot
wikitext
text/x-wiki
== Summary ==
Call transcript screenshot
0897791029faf53701958eb8f3362adc8ca37938
Main Page
0
1
1772
1705
2024-07-24T20:00:24Z
Ben
2
added a couple actuators
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.youtube.com/watch?v=LiNgr1tz49I&list=PLZnJoM76RM6ItAfZIxJYNKdaR_BobleLY Advanced Robot Dynamics]
| High-quality open-source course from CMU
|-
| [https://www.youtube.com/watch?v=6rUdAOCNXAU&list=PLZnJoM76RM6KugDT9sw5zhAmqKnGeoLRa Optimal Control]
| High-quality open-source course from CMU
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
| Textbook for graduate-level courses in robotics
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Courses]]
| Category for pages about useful courses related to robotics
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Firmware]]
| Category for pages relating to firmware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Papers]]
| Category for humanoid robotics papers
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== List of Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]], [[Alex]]
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[MagicLab,_DREAME]]
| [[MagicBot]]
|-
| [[Engineered Arts]]
| [[Ameca]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure AI]]
| [[Figure 01]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[GALBOT]]
| [[GALBOT]]
|-
| [[Haier]]
| [[Kuavo (Kuafu)]]
|-
| [[Honda Robotics]]
| [[ASIMO]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Instituto Italiano]]
| [[iCub]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[LASER Robotics]]
| [[HECTOR V2]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[NASA]]
| [[Valkyrie]], [[Robonaut2]]
|-
| [[NEURA Robotics]]
| [[4NE-1]]
|-
| [[Noetix]]
| [[Dora]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[PaXini]]
| [[Tora]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[Pollen Robotics]]
| [[Reachy]]
|-
| [[Proxy]]
|
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[SoftBank Robotics]]
| [[Pepper]], [[NAO]]
|-
| [[Stanford Robotics Lab]]
| [[OceanOneK]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[SUPCON]]
| [[Navigator α]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[THK]]
|
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[UC Berkeley]]
| [[Berkeley Blue]]
|-
| [[Unitree]]
| [[H1]], [[G1]]
|-
| [[University of Tehran]]
| [[Surena IV]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[WorkFar]]
| [[WorkFar Syntro]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Xpeng]]
| [[PX5]]
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== List of Actuators ===
{| class="wikitable"
|-
! Actuator
! Notes
|-
| [[OBot]]
| Open-source actuator
|-
| [[SPIN Servo]]
| Open-source actuator
|-
| [[VESCular6]]
| A project based on [[VESC]]
|-
| [[ODrive]]
| A precision motor controller
|-
| [[Solo Motor Controller]]
| A motor controller alternative to the [[ODrive]].
|-
| [[J60]]
| Actuators built for the [[DEEP Robotics]] quadrupeds.
|-
| [[Robstride]]
|
|-
| [[MyActuator]]
|
|-
| [[K-Scale Motor Controller]]
| An open-source motor controller
|}
=== Discord community ===
[https://discord.gg/rhCy6UdBRD Discord]
63e8146dc2a86888bb4779f188ff0497fe9d2d43
1773
1772
2024-07-24T20:05:03Z
Ben
2
change sections around, split actuators and motor controllers
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.youtube.com/watch?v=LiNgr1tz49I&list=PLZnJoM76RM6ItAfZIxJYNKdaR_BobleLY Advanced Robot Dynamics]
| High-quality open-source course from CMU
|-
| [https://www.youtube.com/watch?v=6rUdAOCNXAU&list=PLZnJoM76RM6KugDT9sw5zhAmqKnGeoLRa Optimal Control]
| High-quality open-source course from CMU
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
| Textbook for graduate-level courses in robotics
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Courses]]
| Category for pages about useful courses related to robotics
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Firmware]]
| Category for pages relating to firmware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Papers]]
| Category for humanoid robotics papers
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== Actuators ===
{| class="wikitable"
|-
! Actuator
|-
| [[SPIN Servo]]
|-
| [[J60]]
|-
| [[Robstride]]
|-
| [[MyActuator]]
|}
=== Motor Controllers ===
{| class="wikitable"
|-
! Controller
|-
| [[VESCular6]]
|-
| [[ODrive]]
|-
| [[OBot]]
|-
| [[Solo Motor Controller]]
|-
| [[K-Scale Motor Controller]]
|}
=== Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]], [[Alex]]
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[MagicLab,_DREAME]]
| [[MagicBot]]
|-
| [[Engineered Arts]]
| [[Ameca]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure AI]]
| [[Figure 01]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[GALBOT]]
| [[GALBOT]]
|-
| [[Haier]]
| [[Kuavo (Kuafu)]]
|-
| [[Honda Robotics]]
| [[ASIMO]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Instituto Italiano]]
| [[iCub]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[LASER Robotics]]
| [[HECTOR V2]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[NASA]]
| [[Valkyrie]], [[Robonaut2]]
|-
| [[NEURA Robotics]]
| [[4NE-1]]
|-
| [[Noetix]]
| [[Dora]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[PaXini]]
| [[Tora]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[Pollen Robotics]]
| [[Reachy]]
|-
| [[Proxy]]
|
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[SoftBank Robotics]]
| [[Pepper]], [[NAO]]
|-
| [[Stanford Robotics Lab]]
| [[OceanOneK]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[SUPCON]]
| [[Navigator α]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[THK]]
|
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[UC Berkeley]]
| [[Berkeley Blue]]
|-
| [[Unitree]]
| [[H1]], [[G1]]
|-
| [[University of Tehran]]
| [[Surena IV]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[WorkFar]]
| [[WorkFar Syntro]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Xpeng]]
| [[PX5]]
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== Discord community ===
[https://discord.gg/rhCy6UdBRD Discord]
0a90d29b98382679269406f36929a7ff2af5eb76
1776
1773
2024-07-24T20:05:28Z
Ben
2
/* Actuators */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.youtube.com/watch?v=LiNgr1tz49I&list=PLZnJoM76RM6ItAfZIxJYNKdaR_BobleLY Advanced Robot Dynamics]
| High-quality open-source course from CMU
|-
| [https://www.youtube.com/watch?v=6rUdAOCNXAU&list=PLZnJoM76RM6KugDT9sw5zhAmqKnGeoLRa Optimal Control]
| High-quality open-source course from CMU
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
| Textbook for graduate-level courses in robotics
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Courses]]
| Category for pages about useful courses related to robotics
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Firmware]]
| Category for pages relating to firmware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Papers]]
| Category for humanoid robotics papers
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== Actuators ===
{| class="wikitable"
|-
! Actuator
|-
| [[SPIN Servo]]
|-
| [[DEEP Robotics J60]]
|-
| [[Robstride]]
|-
| [[MyActuator]]
|}
=== Motor Controllers ===
{| class="wikitable"
|-
! Controller
|-
| [[VESCular6]]
|-
| [[ODrive]]
|-
| [[OBot]]
|-
| [[Solo Motor Controller]]
|-
| [[K-Scale Motor Controller]]
|}
=== Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]], [[Alex]]
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[MagicLab,_DREAME]]
| [[MagicBot]]
|-
| [[Engineered Arts]]
| [[Ameca]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure AI]]
| [[Figure 01]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[GALBOT]]
| [[GALBOT]]
|-
| [[Haier]]
| [[Kuavo (Kuafu)]]
|-
| [[Honda Robotics]]
| [[ASIMO]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Instituto Italiano]]
| [[iCub]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[LASER Robotics]]
| [[HECTOR V2]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[NASA]]
| [[Valkyrie]], [[Robonaut2]]
|-
| [[NEURA Robotics]]
| [[4NE-1]]
|-
| [[Noetix]]
| [[Dora]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[PaXini]]
| [[Tora]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[Pollen Robotics]]
| [[Reachy]]
|-
| [[Proxy]]
|
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[SoftBank Robotics]]
| [[Pepper]], [[NAO]]
|-
| [[Stanford Robotics Lab]]
| [[OceanOneK]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[SUPCON]]
| [[Navigator α]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[THK]]
|
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[UC Berkeley]]
| [[Berkeley Blue]]
|-
| [[Unitree]]
| [[H1]], [[G1]]
|-
| [[University of Tehran]]
| [[Surena IV]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[WorkFar]]
| [[WorkFar Syntro]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Xpeng]]
| [[PX5]]
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== Discord community ===
[https://discord.gg/rhCy6UdBRD Discord]
7ad6d2dc9ac1edb7d25f048a7743e755186889ea
1783
1776
2024-07-24T20:11:14Z
Ben
2
/* Actuators */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.youtube.com/watch?v=LiNgr1tz49I&list=PLZnJoM76RM6ItAfZIxJYNKdaR_BobleLY Advanced Robot Dynamics]
| High-quality open-source course from CMU
|-
| [https://www.youtube.com/watch?v=6rUdAOCNXAU&list=PLZnJoM76RM6KugDT9sw5zhAmqKnGeoLRa Optimal Control]
| High-quality open-source course from CMU
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
| Textbook for graduate-level courses in robotics
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Courses]]
| Category for pages about useful courses related to robotics
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Firmware]]
| Category for pages relating to firmware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Papers]]
| Category for humanoid robotics papers
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== Actuators ===
{| class="wikitable"
|-
! Actuator
|-
| [[SPIN Servo]]
|-
| [[DEEP Robotics J60]]
|-
| [[Robstride]]
|-
| [[MyActuator]]
|-
| [[Encos]]
|}
=== Motor Controllers ===
{| class="wikitable"
|-
! Controller
|-
| [[VESCular6]]
|-
| [[ODrive]]
|-
| [[OBot]]
|-
| [[Solo Motor Controller]]
|-
| [[K-Scale Motor Controller]]
|}
=== Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]], [[Alex]]
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[MagicLab,_DREAME]]
| [[MagicBot]]
|-
| [[Engineered Arts]]
| [[Ameca]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure AI]]
| [[Figure 01]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[GALBOT]]
| [[GALBOT]]
|-
| [[Haier]]
| [[Kuavo (Kuafu)]]
|-
| [[Honda Robotics]]
| [[ASIMO]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Instituto Italiano]]
| [[iCub]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[LASER Robotics]]
| [[HECTOR V2]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[NASA]]
| [[Valkyrie]], [[Robonaut2]]
|-
| [[NEURA Robotics]]
| [[4NE-1]]
|-
| [[Noetix]]
| [[Dora]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[PaXini]]
| [[Tora]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[Pollen Robotics]]
| [[Reachy]]
|-
| [[Proxy]]
|
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[SoftBank Robotics]]
| [[Pepper]], [[NAO]]
|-
| [[Stanford Robotics Lab]]
| [[OceanOneK]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[SUPCON]]
| [[Navigator α]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[THK]]
|
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[UC Berkeley]]
| [[Berkeley Blue]]
|-
| [[Unitree]]
| [[H1]], [[G1]]
|-
| [[University of Tehran]]
| [[Surena IV]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[WorkFar]]
| [[WorkFar Syntro]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Xpeng]]
| [[PX5]]
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== Discord community ===
[https://discord.gg/rhCy6UdBRD Discord]
fdd120e83941ecb213f3af630ccdab1dbb721c85
DEEP Robotics J60
0
197
1774
816
2024-07-24T20:05:19Z
Ben
2
Ben moved page [[J60]] to [[DEEP Robotics J60]]
wikitext
text/x-wiki
The [https://www.deeprobotics.cn/en/index/j60.html J60] by [[DEEP Robotics]] is a family of high-performance actuators designed for use in quadruped and humanoid robots.
== Overview ==
The J60 actuators have been specifically designed for use in humanoid robots and in [[DEEP Robotics]] quadrupeds. They are reliable and durable, featuring a high torque-to-weight ratio. Each joint integrates a gear reduction, a frameless torque motor, a servo driver, and an absolute-value encoder into one compact unit.
=== Actuators ===
{{infobox actuator
| name = J60-6
| manufacturer = DEEP Robotics
| link = https://www.deeprobotics.cn/en/index/j60.html
| peak_torque = 19.94 Nm
| peak_speed = 24.18 rad/s
| dimensions = 76.5mm diameter, 63mm length
| weight = 480g
| absolute_encoder_resolution = 14 bit
| operating_voltage_range = 12-36V
| standard_operating_voltage = 24V
| interface = CAN bus / RS485
| control_frequency = 1kHz
}}
{{infobox actuator
| name = J60-10
| manufacturer = DEEP Robotics
| link = https://www.deeprobotics.cn/en/index/j60.html
| peak_torque = 30.50 Nm
| peak_speed = 15.49 rad/s
| dimensions = 76.5mm diameter, 72.5mm length
| weight = 540g
| absolute_encoder_resolution = 14 bit
| operating_voltage_range = 12-36V
| standard_operating_voltage = 24V
| interface = CAN bus / RS485
| control_frequency = 1kHz
}}
[[Category: Actuators]]
b5cdee73845d295957f9422b4ae2f30130be176c
1777
1774
2024-07-24T20:05:41Z
Ben
2
/* Actuators */
wikitext
text/x-wiki
The [https://www.deeprobotics.cn/en/index/j60.html J60] by [[DEEP Robotics]] is a family of high-performance actuators designed for use in quadruped and humanoid robots.
== Overview ==
The J60 actuators have been specifically designed for use in humanoid robots and in [[DEEP Robotics]] quadrupeds. They are reliable and durable, featuring a high torque-to-weight ratio. Each joint integrates a gear reduction, a frameless torque motor, a servo driver, and an absolute-value encoder into one compact unit.
=== Actuators ===
{{infobox actuator
| name = J60-6
| manufacturer = DEEP Robotics
| link = https://www.deeprobotics.cn/en/index/j60.html
| peak_torque = 19.94 Nm
| peak_speed = 24.18 rad/s
| dimensions = 76.5mm diameter, 63mm length
| weight = 480g
| absolute_encoder_resolution = 14 bit
| operating_voltage_range = 12-36V
| standard_operating_voltage = 24V
| interface = CAN bus / RS485
| control_frequency = 1kHz
}}
{{infobox actuator
| name = J60-10
| manufacturer = DEEP Robotics
| link = https://www.deeprobotics.cn/en/index/j60.html
| peak_torque = 30.50 Nm
| peak_speed = 15.49 rad/s
| dimensions = 76.5mm diameter, 72.5mm length
| weight = 540g
| absolute_encoder_resolution = 14 bit
| operating_voltage_range = 12-36V
| standard_operating_voltage = 24V
| interface = CAN bus / RS485
| control_frequency = 1kHz
}}
[[Category: Actuators]]
b2bcdb453bf0f1c0a488c65cfede379b5ca03e6d
J60
0
405
1775
2024-07-24T20:05:19Z
Ben
2
Ben moved page [[J60]] to [[DEEP Robotics J60]]
wikitext
text/x-wiki
#REDIRECT [[DEEP Robotics J60]]
c07b4fe6c14364c1abf7935d2d63b9bb5c72064e
Robstride
0
406
1778
2024-07-24T20:06:54Z
Ben
2
Created page with " {{infobox actuator | name = RobStride 01 | manufacturer = RobStride | link = https://robstride.com/products/robStride01 | peak_torque = 17 Nm | peak_speed = | dimensions = |..."
wikitext
text/x-wiki
{{infobox actuator
| name = RobStride 01
| manufacturer = RobStride
| link = https://robstride.com/products/robStride01
| peak_torque = 17 Nm
| peak_speed =
| dimensions =
| weight =
| absolute_encoder_resolution =
| operating_voltage_range =
| standard_operating_voltage =
| interface = CAN
| control_frequency =
}}
600f2c0ef2c11943d47de206fc6ee56ac52d46cf
1779
1778
2024-07-24T20:08:18Z
Ben
2
wikitext
text/x-wiki
[https://robstride.com/ Robstride] is an actuator manufacturing startup. The company was founded in 2024 by a team from Xiaomi that worked on the CyberDog and CyberGear projects.
== Actuators ==
{{infobox actuator
| name = RobStride 01
| manufacturer = RobStride
| link = https://robstride.com/products/robStride01
| peak_torque = 17 Nm
| peak_speed =
| dimensions =
| weight =
| absolute_encoder_resolution =
| operating_voltage_range =
| standard_operating_voltage =
| interface = CAN
| control_frequency =
}}
701a0bb15d21f90d5e0770ab376cf93a40e9d295
1780
1779
2024-07-24T20:08:46Z
Ben
2
/* Actutators */
wikitext
text/x-wiki
[https://robstride.com/ Robstride] is an actuator manufacturing startup. The company was founded in 2024 by a team from Xiaomi that worked on the CyberDog and CyberGear projects.
== Actuators ==
{{infobox actuator
| name = RobStride 01
| manufacturer = RobStride
| link = https://robstride.com/products/robStride01
| peak_torque = 17 Nm
| peak_speed =
| dimensions =
| weight =
| absolute_encoder_resolution =
| operating_voltage_range =
| standard_operating_voltage =
| interface = CAN
| control_frequency =
}}
{{infobox actuator
| name = RobStride 04
| manufacturer = RobStride
| link = https://robstride.com/products/robStride04
| peak_torque = 120 Nm
| peak_speed =
| dimensions =
| weight =
| absolute_encoder_resolution =
| operating_voltage_range =
| standard_operating_voltage =
| interface = CAN
| control_frequency =
}}
d95d3df3e14ee60e1682130520325b3a4e95af2f
1781
1780
2024-07-24T20:09:51Z
Ben
2
/* Actutators */
wikitext
text/x-wiki
[https://robstride.com/ Robstride] is an actuator manufacturing startup. The company was founded in 2024 by a team from Xiaomi that worked on the CyberDog and CyberGear projects.
== Actuators ==
{{infobox actuator
| name = RobStride 01
| manufacturer = RobStride
| purchase_link = https://robstride.com/products/robStride01
| peak_torque = 17 Nm
| peak_speed =
| dimensions =
| weight =
| absolute_encoder_resolution =
| operating_voltage_range =
| standard_operating_voltage =
| interface = CAN
| control_frequency =
}}
{{infobox actuator
| name = RobStride 04
| manufacturer = RobStride
| purchase_link = https://robstride.com/products/robStride04
| peak_torque = 120 Nm
| peak_speed =
| dimensions =
| weight =
| absolute_encoder_resolution =
| operating_voltage_range =
| standard_operating_voltage =
| interface = CAN
| control_frequency =
}}
a93cdab5b7555e84cd309aba28be6f326ad02420
1782
1781
2024-07-24T20:10:25Z
Ben
2
/* Actutators */
wikitext
text/x-wiki
[https://robstride.com/ Robstride] is an actuator manufacturing startup. The company was founded in 2024 by a team from Xiaomi that worked on the CyberDog and CyberGear projects.
== Actuators ==
{{infobox actuator
| name = RobStride 01
| manufacturer = RobStride
| purchase_link = https://robstride.com/products/robStride01
| peak_torque = 17 Nm
| peak_speed =
| dimensions =
| weight =
| absolute_encoder_resolution =
| voltage = 48V
| interface = CAN
| control_frequency =
}}
{{infobox actuator
| name = RobStride 04
| manufacturer = RobStride
| purchase_link = https://robstride.com/products/robStride04
| peak_torque = 120 Nm
| peak_speed =
| dimensions =
| weight =
| absolute_encoder_resolution =
| voltage = 48V
| interface = CAN
| control_frequency =
}}
4dbf89e031740926480092778c5a3966e5f659a0
File:EC-A10020-P1-12-6.png
6
407
1784
2024-07-24T20:15:45Z
Ben
2
wikitext
text/x-wiki
EC-A10020-P1-12-6
39da5e4a9f286ecc28e7b933c023c101dee15695
Encos
0
408
1785
2024-07-24T20:15:55Z
Ben
2
Created page with "[http://encos.cn/ Encos] is a Chinese actuator manufacturer {{infobox actuator | name = EC-A10020-P1-12/6 | manufacturer = Encos | purchase_link = http://encos.cn/ProDetail.a..."
wikitext
text/x-wiki
[http://encos.cn/ Encos] is a Chinese actuator manufacturer.
{{infobox actuator
| name = EC-A10020-P1-12/6
| manufacturer = Encos
| purchase_link = http://encos.cn/ProDetail.aspx?ProID=183
| nominal_torque = 50 Nm
| peak_torque = 150 Nm
| peak_speed =
| dimensions =
| weight = 1.35 Kg
| absolute_encoder_resolution =
| voltage = 48V
| interface = CAN
| control_frequency = 2000 Hz
}}
[[File:EC-A10020-P1-12-6.png|thumb]]
3f822a7914d47e1f1f582e5358a611692d146357
File:EC-A8112-P1-18.png
6
409
1786
2024-07-24T20:17:24Z
Ben
2
wikitext
text/x-wiki
EC-A8112-P1-18
ba3d328a1a5125ccba90523ea28032e852b51dd7
File:EC-A4310-P2-36.png
6
410
1787
2024-07-24T20:18:22Z
Ben
2
wikitext
text/x-wiki
EC-A4310-P2-36
0e15c25d87939644dec63563af8b7991c0da388c
File:EC-A8120-P1-6.png
6
411
1788
2024-07-24T20:19:45Z
Ben
2
wikitext
text/x-wiki
EC-A8120-P1-6
cb53eb7e8cde55ea6ff823422aaf19598f44dcb6
Encos
0
408
1789
1785
2024-07-24T20:19:57Z
Ben
2
add more actuators
wikitext
text/x-wiki
[http://encos.cn/ Encos] is a Chinese actuator manufacturer.
{{infobox actuator
| name = EC-A10020-P1-12/6
| manufacturer = Encos
| purchase_link = http://encos.cn/ProDetail.aspx?ProID=183
| nominal_torque = 50 Nm
| peak_torque = 150 Nm
| peak_speed =
| dimensions =
| weight = 1.35 Kg
| absolute_encoder_resolution =
| voltage = 48V
| interface = CAN
| control_frequency = 2000 Hz
}}
[[File:EC-A10020-P1-12-6.png|thumb|none|EC-A10020-P1-12/6]]
{{infobox actuator
| name = EC-A8112-P1-18
| manufacturer = Encos
| purchase_link = http://encos.cn/ProDetail.aspx?ProID=203
| nominal_torque = 30 Nm
| peak_torque = 90 Nm
| peak_speed =
| dimensions =
| weight = 840 g
| absolute_encoder_resolution =
| voltage = 48V
| interface = CAN
| control_frequency = 2000 Hz
}}
[[File:EC-A8112-P1-18.png|thumb|none|EC-A8112-P1-18]]
{{infobox actuator
| name = EC-A8112-P1-18
| manufacturer = Encos
| purchase_link = http://encos.cn/ProDetail.aspx?ProID=204
| nominal_torque = 30 Nm
| peak_torque = 90 Nm
| peak_speed =
| dimensions =
| weight = 840 g
| absolute_encoder_resolution =
| voltage = 48V
| interface = CAN
| control_frequency = 2000 Hz
}}
[[File:EC-A4310-P2-36.png|thumb|none|EC-A4310-P2-36]]
{{infobox actuator
| name = EC-A8120-P1-6
| manufacturer = Encos
| purchase_link = http://encos.cn/ProDetail.aspx?ProID=205
| nominal_torque = 15 Nm
| peak_torque = 50 Nm
| peak_speed =
| dimensions =
| weight = 890 g
| absolute_encoder_resolution =
| voltage = 48V
| interface = CAN
| control_frequency = 2000 Hz
}}
[[File:EC-A8120-P1-6.png|thumb|none|EC-A8120-P1-6]]
43b3c5567dab4fa7ff608b70580df28d01e46253
1790
1789
2024-07-24T20:21:46Z
Ben
2
correct information, move images around
wikitext
text/x-wiki
[http://encos.cn/ Encos] is a Chinese actuator manufacturer.
[[File:EC-A10020-P1-12-6.png|thumb|right|EC-A10020-P1-12/6]]
{{infobox actuator
| name = EC-A10020-P1-12/6
| manufacturer = Encos
| purchase_link = http://encos.cn/ProDetail.aspx?ProID=183
| nominal_torque = 50 Nm
| peak_torque = 150 Nm
| peak_speed =
| dimensions =
| weight = 1.35 Kg
| absolute_encoder_resolution =
| voltage = 48V
| interface = CAN
| control_frequency = 2000 Hz
}}
[[File:EC-A8112-P1-18.png|thumb|right|EC-A8112-P1-18]]
{{infobox actuator
| name = EC-A8112-P1-18
| manufacturer = Encos
| purchase_link = http://encos.cn/ProDetail.aspx?ProID=203
| nominal_torque = 30 Nm
| peak_torque = 90 Nm
| peak_speed =
| dimensions =
| weight = 840 g
| absolute_encoder_resolution =
| voltage = 48V
| interface = CAN
| control_frequency = 2000 Hz
}}
[[File:EC-A4310-P2-36.png|thumb|right|EC-A4310-P2-36]]
{{infobox actuator
| name = EC-A4310-P2-36
| manufacturer = Encos
| purchase_link = http://encos.cn/ProDetail.aspx?ProID=204
| nominal_torque = 12 Nm
| peak_torque = 36 Nm
| peak_speed =
| dimensions =
| weight = 377 g
| absolute_encoder_resolution =
| voltage = 48V
| interface = CAN
| control_frequency = 2000 Hz
}}
[[File:EC-A8120-P1-6.png|thumb|right|EC-A8120-P1-6]]
{{infobox actuator
| name = EC-A8120-P1-6
| manufacturer = Encos
| purchase_link = http://encos.cn/ProDetail.aspx?ProID=205
| nominal_torque = 15 Nm
| peak_torque = 50 Nm
| peak_speed =
| dimensions =
| weight = 890 g
| absolute_encoder_resolution =
| voltage = 48V
| interface = CAN
| control_frequency = 2000 Hz
}}
f3c3f352522eb5bd835f8166f7515cb5a023fab7
1795
1790
2024-07-24T20:35:36Z
Ben
2
wikitext
text/x-wiki
[http://encos.cn/ Encos] is a Chinese actuator manufacturer.
[[File:EC-A10020-P1-12-6.png|thumb|right|EC-A10020-P1-12/6]]
{{infobox actuator
| name = EC-A10020-P1-12/6
| manufacturer = Encos
| purchase_link = http://encos.cn/ProDetail.aspx?ProID=183
| nominal_torque = 50 Nm
| peak_torque = 150 Nm
| peak_speed =
| dimensions =
| weight = 1.35 Kg
| absolute_encoder_resolution =
| voltage = 48V
| interface = CAN
| control_frequency = 2000 Hz
}}
[[File:EC-A8112-P1-18.png|thumb|right|EC-A8112-P1-18]]
{{infobox actuator
| name = EC-A8112-P1-18
| manufacturer = Encos
| purchase_link = http://encos.cn/ProDetail.aspx?ProID=203
| nominal_torque = 30 Nm
| peak_torque = 90 Nm
| peak_speed =
| dimensions =
| weight = 840 g
| absolute_encoder_resolution =
| voltage = 48V
| interface = CAN
| control_frequency = 2000 Hz
}}
[[File:EC-A4310-P2-36.png|thumb|right|EC-A4310-P2-36]]
{{infobox actuator
| name = EC-A4310-P2-36
| manufacturer = Encos
| purchase_link = http://encos.cn/ProDetail.aspx?ProID=204
| nominal_torque = 12 Nm
| peak_torque = 36 Nm
| peak_speed =
| dimensions =
| weight = 377 g
| absolute_encoder_resolution =
| voltage = 48V
| interface = CAN
| control_frequency = 2000 Hz
}}
[[File:EC-A8120-P1-6.png|thumb|right|EC-A8120-P1-6]]
{{infobox actuator
| name = EC-A8120-P1-6
| manufacturer = Encos
| purchase_link = http://encos.cn/ProDetail.aspx?ProID=205
| nominal_torque = 15 Nm
| peak_torque = 50 Nm
| peak_speed =
| dimensions =
| weight = 890 g
| absolute_encoder_resolution =
| voltage = 48V
| interface = CAN
| control_frequency = 2000 Hz
}}
1dd58848943c3e1c055e6ad4d230d927ada7a71f
Main Page
0
1
1791
1783
2024-07-24T20:28:32Z
Ben
2
/* Actuators */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.youtube.com/watch?v=LiNgr1tz49I&list=PLZnJoM76RM6ItAfZIxJYNKdaR_BobleLY Advanced Robot Dynamics]
| High-quality open-source course from CMU
|-
| [https://www.youtube.com/watch?v=6rUdAOCNXAU&list=PLZnJoM76RM6KugDT9sw5zhAmqKnGeoLRa Optimal Control]
| High-quality open-source course from CMU
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
| Textbook for graduate-level courses in robotics
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Courses]]
| Category for pages about useful courses related to robotics
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Firmware]]
| Category for pages relating to firmware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Papers]]
| Category for humanoid robotics papers
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== Actuators ===
{| class="wikitable"
|-
! Actuator
|-
| [[SPIN Servo]]
|-
| [[DEEP Robotics J60]]
|-
| [[Robstride]]
|-
| [[MyActuator]]
|-
| [[Encos]]
|-
| [[Steadywin]]
|}
=== Motor Controllers ===
{| class="wikitable"
|-
! Controller
|-
| [[VESCular6]]
|-
| [[ODrive]]
|-
| [[OBot]]
|-
| [[Solo Motor Controller]]
|-
| [[K-Scale Motor Controller]]
|}
=== Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]], [[Alex]]
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[MagicLab,_DREAME]]
| [[MagicBot]]
|-
| [[Engineered Arts]]
| [[Ameca]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure AI]]
| [[Figure 01]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[GALBOT]]
| [[GALBOT]]
|-
| [[Haier]]
| [[Kuavo (Kuafu)]]
|-
| [[Honda Robotics]]
| [[ASIMO]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Instituto Italiano]]
| [[iCub]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[LASER Robotics]]
| [[HECTOR V2]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[NASA]]
| [[Valkyrie]], [[Robonaut2]]
|-
| [[NEURA Robotics]]
| [[4NE-1]]
|-
| [[Noetix]]
| [[Dora]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[PaXini]]
| [[Tora]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[Pollen Robotics]]
| [[Reachy]]
|-
| [[Proxy]]
|
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[SoftBank Robotics]]
| [[Pepper]], [[NAO]]
|-
| [[Stanford Robotics Lab]]
| [[OceanOneK]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[SUPCON]]
| [[Navigator α]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[THK]]
|
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[UC Berkeley]]
| [[Berkeley Blue]]
|-
| [[Unitree]]
| [[H1]], [[G1]]
|-
| [[University of Tehran]]
| [[Surena IV]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[WorkFar]]
| [[WorkFar Syntro]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Xpeng]]
| [[PX5]]
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== Discord community ===
[https://discord.gg/rhCy6UdBRD Discord]
c72fe58381bfc5362eed2b82ce1e73d57e74676d
1808
1791
2024-08-13T18:43:44Z
Ben
2
/* Actuators */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.youtube.com/watch?v=LiNgr1tz49I&list=PLZnJoM76RM6ItAfZIxJYNKdaR_BobleLY Advanced Robot Dynamics]
| High-quality open-source course from CMU
|-
| [https://www.youtube.com/watch?v=6rUdAOCNXAU&list=PLZnJoM76RM6KugDT9sw5zhAmqKnGeoLRa Optimal Control]
| High-quality open-source course from CMU
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
| Textbook for graduate-level courses in robotics
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Courses]]
| Category for pages about useful courses related to robotics
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Firmware]]
| Category for pages relating to firmware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Papers]]
| Category for humanoid robotics papers
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== Actuators ===
{| class="wikitable"
|-
! Actuator
|-
| [[SPIN Servo]]
|-
| [[DEEP Robotics J60]]
|-
| [[Robstride]]
|-
| [[MyActuator]]
|-
| [[Encos]]
|-
| [[Steadywin]]
|-
| [[Elemental Motors]]
|}
=== Motor Controllers ===
{| class="wikitable"
|-
! Controller
|-
| [[VESCular6]]
|-
| [[ODrive]]
|-
| [[OBot]]
|-
| [[Solo Motor Controller]]
|-
| [[K-Scale Motor Controller]]
|}
=== Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]], [[Alex]]
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[MagicLab,_DREAME]]
| [[MagicBot]]
|-
| [[Engineered Arts]]
| [[Ameca]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure AI]]
| [[Figure 01]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[GALBOT]]
| [[GALBOT]]
|-
| [[Haier]]
| [[Kuavo (Kuafu)]]
|-
| [[Honda Robotics]]
| [[ASIMO]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Instituto Italiano]]
| [[iCub]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[LASER Robotics]]
| [[HECTOR V2]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[NASA]]
| [[Valkyrie]], [[Robonaut2]]
|-
| [[NEURA Robotics]]
| [[4NE-1]]
|-
| [[Noetix]]
| [[Dora]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[PaXini]]
| [[Tora]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[Pollen Robotics]]
| [[Reachy]]
|-
| [[Proxy]]
|
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[SoftBank Robotics]]
| [[Pepper]], [[NAO]]
|-
| [[Stanford Robotics Lab]]
| [[OceanOneK]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[SUPCON]]
| [[Navigator α]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[THK]]
|
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[UC Berkeley]]
| [[Berkeley Blue]]
|-
| [[Unitree]]
| [[H1]], [[G1]]
|-
| [[University of Tehran]]
| [[Surena IV]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[WorkFar]]
| [[WorkFar Syntro]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Xpeng]]
| [[PX5]]
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== Discord community ===
[https://discord.gg/rhCy6UdBRD Discord]
fd610b0436105b3fbd13757cea15dfa0c07efd7f
1825
1808
2024-08-29T18:46:41Z
Ben
2
/* Discord community */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.youtube.com/watch?v=LiNgr1tz49I&list=PLZnJoM76RM6ItAfZIxJYNKdaR_BobleLY Advanced Robot Dynamics]
| High-quality open-source course from CMU
|-
| [https://www.youtube.com/watch?v=6rUdAOCNXAU&list=PLZnJoM76RM6KugDT9sw5zhAmqKnGeoLRa Optimal Control]
| High-quality open-source course from CMU
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
| Textbook for graduate-level courses in robotics
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Courses]]
| Category for pages about useful courses related to robotics
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Firmware]]
| Category for pages relating to firmware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Papers]]
| Category for humanoid robotics papers
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== Actuators ===
{| class="wikitable"
|-
! Actuator
|-
| [[SPIN Servo]]
|-
| [[DEEP Robotics J60]]
|-
| [[Robstride]]
|-
| [[MyActuator]]
|-
| [[Encos]]
|-
| [[Steadywin]]
|-
| [[Elemental Motors]]
|}
=== Motor Controllers ===
{| class="wikitable"
|-
! Controller
|-
| [[VESCular6]]
|-
| [[ODrive]]
|-
| [[OBot]]
|-
| [[Solo Motor Controller]]
|-
| [[K-Scale Motor Controller]]
|}
=== Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]], [[Alex]]
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[MagicLab,_DREAME]]
| [[MagicBot]]
|-
| [[Engineered Arts]]
| [[Ameca]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure AI]]
| [[Figure 01]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[GALBOT]]
| [[GALBOT]]
|-
| [[Haier]]
| [[Kuavo (Kuafu)]]
|-
| [[Honda Robotics]]
| [[ASIMO]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Instituto Italiano]]
| [[iCub]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[LASER Robotics]]
| [[HECTOR V2]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[NASA]]
| [[Valkyrie]], [[Robonaut2]]
|-
| [[NEURA Robotics]]
| [[4NE-1]]
|-
| [[Noetix]]
| [[Dora]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[PaXini]]
| [[Tora]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[Pollen Robotics]]
| [[Reachy]]
|-
| [[Proxy]]
|
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[SoftBank Robotics]]
| [[Pepper]], [[NAO]]
|-
| [[Stanford Robotics Lab]]
| [[OceanOneK]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[SUPCON]]
| [[Navigator α]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[THK]]
|
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[UC Berkeley]]
| [[Berkeley Blue]]
|-
| [[Unitree]]
| [[H1]], [[G1]]
|-
| [[University of Tehran]]
| [[Surena IV]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[WorkFar]]
| [[WorkFar Syntro]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Xpeng]]
| [[PX5]]
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== Discord community ===
[https://discord.gg/kscale Discord]
14960d96ca5c8a9180d5a63d9851a55f7926e115
File:Steadywin-gim10015-9.jpg
6
412
1792
2024-07-24T20:31:19Z
Ben
2
wikitext
text/x-wiki
Steadywin-gim10015-9
0e5df7b138e2e14659e3aa0c57d565acf7167885
Steadywin
0
413
1793
2024-07-24T20:31:31Z
Ben
2
Created page with "[http://steadywin.cn/en/ Steadywin] is a Chinese actuator manufacturer [[File:Steadywin-gim10015-9.jpg|thumb|right|GIM10015-9]] {{infobox actuator | name = GIM10015-9 | manu..."
wikitext
text/x-wiki
[http://steadywin.cn/en/ Steadywin] is a Chinese actuator manufacturer.
[[File:Steadywin-gim10015-9.jpg|thumb|right|GIM10015-9]]
{{infobox actuator
| name = GIM10015-9
| manufacturer = Steadywin
| purchase_link = http://steadywin.cn/en/pd.jsp?id=114#_jcp=3_3
| nominal_torque = 28 Nm
| peak_torque = 70 Nm
| peak_speed =
| dimensions =
| weight =
| absolute_encoder_resolution =
| voltage = 48V
| interface = CAN / RS485
| control_frequency =
}}
135b0567e8b756d85dbaec3b299c6f6f834542e8
1794
1793
2024-07-24T20:34:26Z
Ben
2
wikitext
text/x-wiki
[http://steadywin.cn/en/ Steadywin] is a Chinese actuator manufacturer.
[[File:Steadywin-gim10015-9.jpg|thumb|right|GIM10015-9]]
{{infobox actuator
| name = GIM10015-9
| manufacturer = Steadywin
| purchase_link = http://steadywin.cn/en/pd.jsp?id=114#_jcp=3_3
| nominal_torque = 28 Nm
| peak_torque = 70 Nm
| peak_speed =
| dimensions =
| weight =
| absolute_encoder_resolution =
| voltage = 48V
| interface = CAN / RS485
| control_frequency =
}}
{{infobox actuator
| name = GIM8115-36
| manufacturer = Steadywin
| purchase_link = http://steadywin.cn/en/pd.jsp?id=19#_jcp=3_3
| nominal_torque = 54 Nm
| peak_torque = 150 Nm
| peak_speed =
| dimensions =
| weight =
| absolute_encoder_resolution =
| voltage = 48V
| interface = CAN
| control_frequency =
}}
d0bdd55561fc88847d6367cb3e96fedee96aea57
MyActuator
0
414
1796
2024-07-24T21:28:03Z
Vrtnis
21
/*create page*/
wikitext
text/x-wiki
'''Suzhou Micro Actuator Technology Co., Ltd''' (MyActuator) specializes in high-performance brushless servo motors, ideal for a range of applications including robots and portable devices. Their products, noted for their compact yet powerful design, offer versatility and precision, making them a top choice for robotics enthusiasts and professionals alike.
0cd7b178c0fd7aa1e3ed6c4b56ed648ff5485d2d
1797
1796
2024-07-24T21:29:46Z
Vrtnis
21
/*add RMD-X6-P8-8*/
wikitext
text/x-wiki
'''Suzhou Micro Actuator Technology Co., Ltd''' (MyActuator) specializes in high-performance brushless servo motors, ideal for a range of applications including robots and portable devices. Their products, noted for their compact yet powerful design, offer versatility and precision, making them a top choice for robotics enthusiasts and professionals alike.
{{infobox actuator
| name = RMD-X6-P8-8
| manufacturer = Suzhou Micro Actuator Technology Co., Ltd.
| cost = USD 1
| purchase_link = https://www.myactuator.com/x6-8-details
| nominal_torque = 8 Nm
| peak_torque = 8 Nm
| weight = 1 kg
| dimensions = 10cm radius
| gear_ratio = 1:8
| voltage = 48V
| interface = CAN
| gear_type = Planetary
}}
df0d9ffee825b9ed9b206febef4fdd7abad204c3
1798
1797
2024-07-24T21:33:00Z
Vrtnis
21
wikitext
text/x-wiki
'''Suzhou Micro Actuator Technology Co., Ltd''' (MyActuator) specializes in high-performance brushless servo motors, ideal for a range of applications including robots and portable devices. Their products, noted for their compact yet powerful design, offer versatility and precision, making them a top choice for robotics enthusiasts and professionals alike.
{{infobox actuator
| name = RMD-X6-P8-8
| manufacturer = Suzhou Micro Actuator Technology Co., Ltd.
| purchase_link = https://www.myactuator.com/x6-8-details
| nominal_torque = 8 Nm
| peak_torque = 8 Nm
| weight = 1 kg
| dimensions = 10cm radius
| gear_ratio = 1:8
| voltage = 48V
| interface = CAN
| gear_type = Planetary
}}
0a6cef1d1d8050714d2987b835ebe719b6e7f647
1799
1798
2024-07-24T22:08:27Z
Vrtnis
21
/*RMD-X8-P6-20-C-N*/
wikitext
text/x-wiki
'''Suzhou Micro Actuator Technology Co., Ltd''' (MyActuator) specializes in high-performance brushless servo motors, ideal for a range of applications including robots and portable devices. Their products, noted for their compact yet powerful design, offer versatility and precision, making them a top choice for robotics enthusiasts and professionals alike.
{{infobox actuator
| name = RMD-X6-P8-8
| manufacturer = Suzhou Micro Actuator Technology Co., Ltd.
| purchase_link = https://www.myactuator.com/x6-8-details
| nominal_torque = 8 Nm
| peak_torque = 8 Nm
| weight = 1 kg
| dimensions = 10cm radius
| gear_ratio = 1:8
| voltage = 48V
| interface = CAN
| gear_type = Planetary
}}
{{infobox actuator
| name = RMD-X8-P6-20-C-N
| manufacturer = Suzhou Micro Actuator Technology Co., Ltd.
| purchase_link = https://www.myactuator.com/x8-20-details
| peak_torque = 20 Nm
| gear_ratio = 1:6
| interface = CAN
| gear_type = Planetary
}}
1e72aa6e1accda6487fc728827c0483d401585b0
1800
1799
2024-07-24T22:14:30Z
Vrtnis
21
wikitext
text/x-wiki
'''Suzhou Micro Actuator Technology Co., Ltd''' (MyActuator) specializes in high-performance brushless servo motors, ideal for a range of applications including robots and portable devices. Their products, noted for their compact yet powerful design, offer versatility and precision, making them a top choice for robotics enthusiasts and professionals alike.
{{infobox actuator
| name = RMD-X6-P8-8
| manufacturer = Suzhou Micro Actuator Technology Co., Ltd.
| purchase_link = https://www.myactuator.com/x6-8-details
| nominal_torque = 8 Nm
| peak_torque = 8 Nm
| weight = 1 kg
| dimensions = 10cm radius
| gear_ratio = 1:8
| voltage = 48V
| interface = CAN
| gear_type = Planetary
}}
{{infobox actuator
| name = RMD-X8-P6-20-C-N
| manufacturer = Suzhou Micro Actuator Technology Co., Ltd.
| purchase_link = https://www.myactuator.com/x8-20-details
| peak_torque = 20 Nm
| gear_ratio = 1:6
| interface = CAN
| gear_type = Planetary
}}
{{infobox actuator
| name = RMD-X8-P36-60-C-N
| manufacturer = Suzhou Micro Actuator Technology Co., Ltd.
| purchase_link = https://www.myactuator.com/x8-60-details
| peak_torque = 60 Nm
| gear_ratio = 1:36
| interface = CAN
| gear_type = Planetary
}}
ef1b8d5a5970c8829579e972e8ba4ec0df6ad372
1801
1800
2024-07-24T22:29:56Z
Vrtnis
21
wikitext
text/x-wiki
'''Suzhou Micro Actuator Technology Co., Ltd''' (MyActuator) specializes in high-performance brushless servo motors, ideal for a range of applications including robots and portable devices. Their products, noted for their compact yet powerful design, offer versatility and precision, making them a top choice for robotics enthusiasts and professionals alike.
{{infobox actuator
| name = RMD-X6-P8-8
| manufacturer = Suzhou Micro Actuator Technology Co., Ltd.
| purchase_link = https://www.myactuator.com/x6-8-details
| nominal_torque = 8 Nm
| peak_torque = 8 Nm
| weight = 1 kg
| dimensions = 10cm radius
| gear_ratio = 1:8
| voltage = 48V
| interface = CAN
| gear_type = Planetary
}}
{{infobox actuator
| name = RMD-X8-P6-20-C-N
| manufacturer = Suzhou Micro Actuator Technology Co., Ltd.
| purchase_link = https://www.myactuator.com/x8-20-details
| peak_torque = 20 Nm
| gear_ratio = 1:6
| interface = CAN
| gear_type = Planetary
}}
{{infobox actuator
| name = RMD-X8-P36-60-C-N
| manufacturer = Suzhou Micro Actuator Technology Co., Ltd.
| purchase_link = https://www.myactuator.com/x8-60-details
| peak_torque = 60 Nm
| gear_ratio = 1:36
| interface = CAN
| gear_type = Planetary
}}
[[Category:Companies]]
e6ab3243426e6888f8015184537ac0e1b3205ead
1803
1801
2024-07-24T22:38:06Z
Vrtnis
21
/* Add HQ image*/
wikitext
text/x-wiki
'''Suzhou Micro Actuator Technology Co., Ltd''' (MyActuator) specializes in high-performance brushless servo motors, ideal for a range of applications including robots and portable devices. Their products, noted for their compact yet powerful design, offer versatility and precision, making them a top choice for robotics enthusiasts and professionals alike.
[[File:Myactuator hq.jpg|thumb|MyActuator HQ Suzhou]]
{{infobox actuator
| name = RMD-X6-P8-8
| manufacturer = Suzhou Micro Actuator Technology Co., Ltd.
| purchase_link = https://www.myactuator.com/x6-8-details
| nominal_torque = 8 Nm
| peak_torque = 8 Nm
| weight = 1 kg
| dimensions = 10cm radius
| gear_ratio = 1:8
| voltage = 48V
| interface = CAN
| gear_type = Planetary
}}
{{infobox actuator
| name = RMD-X8-P6-20-C-N
| manufacturer = Suzhou Micro Actuator Technology Co., Ltd.
| purchase_link = https://www.myactuator.com/x8-20-details
| peak_torque = 20 Nm
| gear_ratio = 1:6
| interface = CAN
| gear_type = Planetary
}}
{{infobox actuator
| name = RMD-X8-P36-60-C-N
| manufacturer = Suzhou Micro Actuator Technology Co., Ltd.
| purchase_link = https://www.myactuator.com/x8-60-details
| peak_torque = 60 Nm
| gear_ratio = 1:36
| interface = CAN
| gear_type = Planetary
}}
[[Category:Companies]]
8d8515e2dff1fb4f610c9f5c3a0d84e04f509325
File:Myactuator hq.jpg
6
415
1802
2024-07-24T22:36:55Z
Vrtnis
21
MyActuator HQ image
wikitext
text/x-wiki
== Summary ==
MyActuator HQ image
0400c6bbfdc428c45afd5448335d5ceae3d28f8f
K-Scale Weekly Progress Updates
0
294
1804
1720
2024-07-26T16:02:44Z
Ben
2
wikitext
text/x-wiki
[[File:Robot taking notes.png|thumb|Robot taking notes (from Midjourney)]]
{| class="wikitable"
|-
! Link
|-
| [https://x.com/kscalelabs/status/1815523839407006189 2024.07.19]
|-
| [https://x.com/kscalelabs/status/1811805432505336073 2024.07.12]
|-
| [https://x.com/kscalelabs/status/1809263616958374286 2024.07.05]
|-
| [https://x.com/kscalelabs/status/1804184936574030284 2024.06.21]
|-
| [https://x.com/kscalelabs/status/1801749382167204086 2024.06.14]
|-
| [https://x.com/kscalelabs/status/1799197382208590132 2024.06.07]
|-
| [https://x.com/kscalelabs/status/1796617681455775944 2024.05.31]
|-
| [https://x.com/kscalelabs/status/1794109131214712914 2024.05.24]
|-
| [https://x.com/kscalelabs/status/1791507358780461496 2024.05.17]
|-
| [https://x.com/kscalelabs/status/1788968705378181145 2024.05.10]
|}
[[Category:K-Scale]]
12e10bd1d266c0884a8633dac1af996d136fe898
1807
1804
2024-08-02T16:49:11Z
Ben
2
wikitext
text/x-wiki
[[File:Robot taking notes.png|thumb|Robot taking notes (from Midjourney)]]
{| class="wikitable"
|-
! Link
|-
| [https://x.com/kscalelabs/status/1819413752539935164 2024.08.02]
|-
| [https://x.com/kscalelabs/status/1816987089386553543 2024.07.26]
|-
| [https://x.com/kscalelabs/status/1815523839407006189 2024.07.19]
|-
| [https://x.com/kscalelabs/status/1811805432505336073 2024.07.12]
|-
| [https://x.com/kscalelabs/status/1809263616958374286 2024.07.05]
|-
| [https://x.com/kscalelabs/status/1804184936574030284 2024.06.21]
|-
| [https://x.com/kscalelabs/status/1801749382167204086 2024.06.14]
|-
| [https://x.com/kscalelabs/status/1799197382208590132 2024.06.07]
|-
| [https://x.com/kscalelabs/status/1796617681455775944 2024.05.31]
|-
| [https://x.com/kscalelabs/status/1794109131214712914 2024.05.24]
|-
| [https://x.com/kscalelabs/status/1791507358780461496 2024.05.17]
|-
| [https://x.com/kscalelabs/status/1788968705378181145 2024.05.10]
|}
[[Category:K-Scale]]
02b05fc865118fb3b04ebcb8894034132adb1b41
OpenSCAD
0
416
1805
2024-08-01T10:29:34Z
Ben
2
openscad notes
wikitext
text/x-wiki
[https://openscad.org/ OpenSCAD] is a programmatic CAD tool.
There are some nice extensions to make dealing with OpenSCAD less painful:
- [https://github.com/BelfrySCAD/BOSL2 BOSL2], which stands for the Belfry OpenSCAD Library, is a collection of implementations of OpenSCAD components.
- [https://pypi.org/project/solidpython2/ SolidPython2] is a Python frontend for OpenSCAD which converts to the OpenSCAD DSL.
69f56a42425e465996093786b342e58cce18e371
1806
1805
2024-08-01T10:30:01Z
Ben
2
fix lists
wikitext
text/x-wiki
[https://openscad.org/ OpenSCAD] is a programmatic CAD tool.
There are some nice extensions to make dealing with OpenSCAD less painful:
* [https://github.com/BelfrySCAD/BOSL2 BOSL2], which stands for the Belfry OpenSCAD Library, is a collection of implementations of OpenSCAD components.
* [https://pypi.org/project/solidpython2/ SolidPython2] is a Python frontend for OpenSCAD which converts Python code to the OpenSCAD DSL (a short sketch follows below).
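As an illustration of the SolidPython2 workflow, here is a minimal, hedged sketch. It assumes the <code>solid2</code> package and its <code>cube</code>/<code>cylinder</code>/<code>save_as_scad</code> helpers, and the part itself is just a made-up example:
<syntaxhighlight lang="python">
# Minimal SolidPython2 sketch: a small plate with a shaft hole, described in
# Python and exported as OpenSCAD source.
from solid2 import cube, cylinder

plate = cube([40, 40, 5], center=True)     # 40 x 40 x 5 mm plate
hole = cylinder(d=8, h=10, center=True)    # through-hole for a shaft
part = plate - hole                        # boolean difference

part.save_as_scad("plate_with_hole.scad")  # writes the OpenSCAD code to a file
</syntaxhighlight>
Opening the generated <code>.scad</code> file in OpenSCAD (and re-running the script after edits) gives a parametric, version-controllable CAD workflow.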
3d13c6df39bf774ca3a0eb4f7359b93bc8908601
Elemental Motors
0
417
1809
2024-08-13T18:49:03Z
Ben
2
Created page with "[https://www.elementalmotors.com/ Elemental Motors] is building high-torque direct drive motors."
wikitext
text/x-wiki
[https://www.elementalmotors.com/ Elemental Motors] is building high-torque direct drive motors.
d5de29643dcea63a95b3eecb8e73eefba4f9f28b
1810
1809
2024-08-13T18:55:15Z
Ben
2
add motors
wikitext
text/x-wiki
[https://www.elementalmotors.com/ Elemental Motors] is building high-torque direct drive motors.
{{infobox actuator
| name = G4 57 Inner Rotator
| manufacturer = Elemental Motors
| nominal_torque = 3.4 Nm
| peak_torque = 5.43 Nm
| peak_speed = 930 RPM
| dimensions = OD: 57mm, ID: 22mm, Length: 23mm
| weight = 0.25 kg
| voltage = 48V
| interface = N/A
| control_frequency = N/A
}}
{{infobox actuator
| name = G3 114 12T Inner Rotator
| manufacturer = Elemental Motors
| nominal_torque = 25 Nm
| peak_torque = 53 Nm
| peak_speed = 627 RPM
| dimensions = OD: 114mm, ID: 60mm, Length: 38.8mm
| weight = 1.5 kg
| voltage = 48V
| interface = N/A
| control_frequency = N/A
}}
{{infobox actuator
| name = G4 95 Inner Rotator
| manufacturer = Elemental Motors
| nominal_torque = 12.6 Nm
| peak_torque = 35.4 Nm
| peak_speed = 253 RPM
| dimensions = OD: 95mm, ID: 45mm, Length: 38.5mm
| weight = 1.26 kg
| voltage = 48V
| interface = N/A
| control_frequency = N/A
}}
{{infobox actuator
| name = G4 115 42T Inner Rotator
| manufacturer = Elemental Motors
| nominal_torque = 35.7 Nm
| peak_torque = 74.1 Nm
| peak_speed = 155 RPM
| dimensions = OD: 115mm, ID: 60mm, Length: 46mm
| weight = 2.1 kg
| voltage = 48V
| interface = N/A
| control_frequency = N/A
}}
{{infobox actuator
| name = G4 170 Inner Rotator
| manufacturer = Elemental Motors
| nominal_torque = 108 Nm
| peak_torque = 212 Nm
| peak_speed = 50 RPM
| dimensions = OD: 170mm, ID: 106mm, Length: 60mm
| weight = 4 kg
| voltage = 48V
| interface = N/A
| control_frequency = N/A
}}
9b9e9eade14f10fb3b0d5facd6982bbf65c5db2b
1X
0
10
1811
216
2024-08-17T06:34:03Z
Joshc
75
updated overview and history
wikitext
text/x-wiki
[https://www.1x.tech/ 1X] (formerly known as Halodi Robotics) is a humanoid robotics company based in Moss, Norway. The company was founded in 2014 with the mission to create an abundant supply of labor via safe, intelligent robots.<ref>https://www.1x.tech/about</ref> They have two robots: [[Eve]] and [[Neo]]. Eve was the first generation, a wheeled robot designed to train their AI models and collect real-world data. Neo is their second model, an improved, bipedal version of Eve which is still under development. Eve and Neo are designed for safe human interaction by reducing actuator inertia. The goal for Eve is to work in logistics and guarding, while Neo is intended to be used in the home.
{{infobox company
| name = 1X Technologies
| country = United States
| website_link = https://www.1x.tech/
| robots = [[Eve]], [[Neo]]
}}
The company is known for its high-torque brushless direct current (BLDC) motor, the Revo1, which it developed in house. These BLDC motors are paired with low-gear-ratio cable drives and are claimed to have the highest torque-to-weight ratio of any direct-drive motor currently available.
== History ==
In 2020, 1X partnered with Everon by ADT Commercial, a commercial security solutions company, to deploy 150-250 humanoid robots in various buildings across the US for night guarding. This is a unique use-case for humanoids because other companies such as Figure and Tesla have been focusing on warehouse and factory applications.
In March 2023, the company closed a funding round of $23.5 million, led by the OpenAI Startup Fund, along with other investors including Tiger Global, Sandwater, Alliance Ventures, and Skagerak Capital.<ref>https://robotsguide.com/robots/eve</ref> The partnership with OpenAI has helped significantly for developing the AI system used in their humanoids.
In January 2024, 1X announced that they raised a $100 million Series B led by EQT Ventures, including Samsung NEXT and existing investors Tiger Global and Nistad Group.<ref>https://www.therobotreport.com/1x-technologies-raises-100m-series-b-advance-neo-humanoid-robot</ref>
== References ==
[[Category:Companies]]
15508cb60101814803001c9ad192b1b341804508
1813
1811
2024-08-17T17:13:24Z
Joshc
75
wikitext
text/x-wiki
[https://www.1x.tech/ 1X] (formerly known as Halodi Robotics) is a humanoid robotics company based in Moss, Norway. The company was founded in 2014 with the mission to create an abundant supply of labor via safe, intelligent robots.<ref>https://www.1x.tech/about</ref> They have two robots: [[Eve]] and [[Neo]]. Eve was the first generation, a wheeled robot designed to train their AI models and collect real-world data. Neo is their second model, an improved, bipedal version of Eve which is still under development. Eve and Neo are designed for safe human interaction by reducing actuator inertia. The goal for Eve is to work in logistics and guarding, while Neo is intended to be used in the home.
{{infobox company
| name = 1X Technologies
| country = United States
| website_link = https://www.1x.tech/
| robots = [[Eve]], [[Neo]]
}}
The company is known for its high-torque brushless direct current (BLDC) motor, the Revo1, which it developed in house. These BLDC motors are paired with low-gear-ratio cable drives and are claimed to have the highest torque-to-weight ratio of any direct-drive motor currently available.
== History ==
In 2020, 1X partnered with Everon by ADT Commercial, a commercial security solutions company, to deploy 150-250 humanoid robots in various buildings across the US for night guarding.
In March 2023, the company closed a funding round of $23.5 million, led by the OpenAI Startup Fund, along with other investors including Tiger Global, Sandwater, Alliance Ventures, and Skagerak Capital.<ref>https://robotsguide.com/robots/eve</ref> The partnership with OpenAI has helped significantly for developing the AI system used in their humanoids.
In January 2024, 1X announced that they raised a $100 million Series B led by EQT Ventures, including Samsung NEXT and existing investors Tiger Global and Nistad Group.<ref>https://www.therobotreport.com/1x-technologies-raises-100m-series-b-advance-neo-humanoid-robot</ref>
== References ==
[[Category:Companies]]
42eb70ad70b772d124ad5ba4997c544d80886496
Eve
0
54
1812
1640
2024-08-17T06:45:03Z
Joshc
75
wikitext
text/x-wiki
EVE is a versatile and agile humanoid robot developed by [[1X]]. It is equipped with cameras and sensors to perceive and interact with its surroundings. EVE’s mobility, dexterity, and balance allow it to navigate complex environments and manipulate objects effectively.
<youtube>https://www.youtube.com/watch?v=20GHG-R9eFI</youtube>
{{infobox robot
| name = EVE
| organization = [[1X]]
| height = 186 cm
| weight = 86 kg
| speed = 14.4 km/hr
| carry_capacity = 15 kg
| runtime = 6 hrs
| video_link = https://www.youtube.com/watch?v=20GHG-R9eFI
}}
== Specifications ==
Eve is designed with a single leg, a set of wheels, and grippers for hands. It has a total of 25 degrees of freedom (DOF): 1 in the neck, 7 in each arm, 6 in the leg, and 1 in each hand and the wheels. It is made mostly out of plastic, aluminum, and fabric. It stands 183 cm tall (the same height as the founder and CEO Bernt Øivind Børnich), weighs 83 lbs, and moves at a top speed of 14.4 km/h.
The computer runs on an Intel i7 CPU for real-time processing and Nvidia Xavier for AI computations. It uses a custom Linux-based operating system, Java, C++, Python, and many open-source packages.
Eve has 4 hours of operating time on a 1.05-kWh lithium-ion battery pack.<ref>https://robotsguide.com/robots/eve</ref>
== References ==
[[Category:Robots]]
03dbaffeb703b443be13fccce65b90711db41def
MinPPO
0
418
1814
2024-08-20T22:20:49Z
Ben
2
Created page with "These are notes for the MinPPO project [https://github.com/kscalelabs/minppo here]. == Testing == * Hidden layer size of 256 shows progress (loss is based on state.q[2]) * s..."
wikitext
text/x-wiki
These are notes for the MinPPO project [https://github.com/kscalelabs/minppo here].
== Testing ==
* Hidden layer size of 256 shows progress (loss is based on state.q[2])
* Setting std to zero makes the rewards NaN; I wonder if there NEEDS to be randomization in the environment
* The ctrl cost seems to be what is producing the NaNs, which is interesting
* It is unrelated to randomization of the environment; I think it is gradient-related
* The first things to become NaN seem to be the actor loss and scores; after that, everything becomes NaN
* Fixed the entropy epsilon; hopefully this works now
b4b3cea66b44aa913c98eed1580a14fe113dbd89
1815
1814
2024-08-20T22:21:30Z
Ben
2
wikitext
text/x-wiki
These are notes for the MinPPO project [https://github.com/kscalelabs/minppo here].
== Testing ==
* Hidden layer size of 256 shows progress (loss is based on state.q[2])
* Setting std to zero makes the rewards NaN; I wonder if there NEEDS to be randomization in the environment
* The ctrl cost seems to be what is producing the NaNs, which is interesting
* It is unrelated to randomization of the environment; I think it is gradient-related
* The first things to become NaN seem to be the actor loss and scores; after that, everything becomes NaN
* Fixed the entropy epsilon; hopefully this works now (see the sketch below)
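The entropy-epsilon note above refers to a common failure mode: if the Gaussian policy's standard deviation collapses to zero, log(std) goes to negative infinity and the gradients of the entropy and log-probabilities become NaN, which then spreads to the actor loss and everything else. Below is a minimal, hedged sketch of that kind of guard (illustrative only; the function name and epsilon value are not taken from the MinPPO code):
<syntaxhighlight lang="python">
import jax.numpy as jnp


def gaussian_entropy(std: jnp.ndarray, eps: float = 1e-8) -> jnp.ndarray:
    """Entropy of a diagonal Gaussian policy, summed over action dimensions."""
    # Without eps, std == 0 gives log(0) = -inf, and the gradients of the
    # entropy and log-probabilities become NaN, which then propagates into
    # the actor loss and every downstream quantity.
    return jnp.sum(0.5 * jnp.log(2.0 * jnp.pi * jnp.e) + jnp.log(std + eps), axis=-1)


# With a zero std the entropy stays finite instead of becoming -inf/NaN.
print(gaussian_entropy(jnp.zeros(3)))
</syntaxhighlight>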
== MNIST Training Example ==
<syntaxhighlight lang="python">
"""Trains a simple MNIST model using Equinox."""
import logging
import equinox as eqx
import jax
import jax.numpy as jnp
import optax
from jax import random
from tensorflow.keras.datasets import mnist
logging.basicConfig(
level=logging.INFO, format="%(asctime)s - %(name)s - %(levelname)s - %(message)s", datefmt="%Y-%m-%d %H:%M:%S"
)
logger = logging.getLogger(__name__)
def load_mnist() -> tuple[jnp.ndarray, jnp.ndarray, jnp.ndarray, jnp.ndarray]:
"""Load and preprocess the MNIST dataset."""
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = jnp.float32(x_train) / 255.0
x_test = jnp.float32(x_test) / 255.0
x_train = x_train.reshape(-1, 28 * 28)
x_test = x_test.reshape(-1, 28 * 28)
y_train = jax.nn.one_hot(y_train, 10)
y_test = jax.nn.one_hot(y_test, 10)
return x_train, y_train, x_test, y_test
class DenseModel(eqx.Module):
"""Define a simple dense neural network model."""
layers: list
def __init__(self, key: jnp.ndarray) -> None:
keys = random.split(key, 3)
self.layers = [
eqx.nn.Linear(28 * 28, 128, key=keys[0]),
eqx.nn.Linear(128, 64, key=keys[1]),
eqx.nn.Linear(64, 10, key=keys[2]),
]
def __call__(self, x: jnp.ndarray) -> jnp.ndarray:
for layer in self.layers[:-1]:
x = jax.nn.relu(layer(x))
return self.layers[-1](x)
@eqx.filter_value_and_grad
def loss_fn(model: DenseModel, x_b: jnp.ndarray, y_b: jnp.ndarray) -> jnp.ndarray:
"""Define the loss function (cross-entropy loss). Vecotrized across batch with vmap."""
pred_b = jax.vmap(model)(x_b)
return -jnp.mean(jnp.sum(y_b * jax.nn.log_softmax(pred_b), axis=-1))
@eqx.filter_jit
def make_step(
model: DenseModel,
optimizer: optax.GradientTransformation,
opt_state: optax.OptState,
x_b: jnp.ndarray,
y_b: jnp.ndarray,
) -> tuple[jnp.ndarray, DenseModel, optax.OptState]:
"""Perform a single optimization step."""
loss, grads = loss_fn(model, x_b, y_b)
updates, opt_state = optimizer.update(grads, opt_state)
model = eqx.apply_updates(model, updates)
return loss, model, opt_state
def train(
model: DenseModel,
optimizer: optax.GradientTransformation,
x_train: jnp.ndarray,
y_train: jnp.ndarray,
batch_size: int,
num_epochs: int,
) -> DenseModel:
"""Train the model using the given data."""
opt_state = optimizer.init(eqx.filter(model, eqx.is_array))
for epoch in range(num_epochs):
for i in range(0, len(x_train), batch_size):
x_batch = x_train[i : i + batch_size]
y_batch = y_train[i : i + batch_size]
loss, model, opt_state = make_step(model, optimizer, opt_state, x_batch, y_batch)
logger.info(f"Epoch {epoch+1}/{num_epochs}, Loss: {loss:.4f}")
return model
@eqx.filter_jit
def accuracy(model: DenseModel, x_b: jnp.ndarray, y_b: jnp.ndarray) -> jnp.ndarray:
"""Takes in batch oftest images/label pairing with model and returns accuracy."""
pred = jax.vmap(model)(x_b)
return jnp.mean(jnp.argmax(pred, axis=-1) == jnp.argmax(y_b, axis=-1))
def main() -> None:
# Load data
x_train, y_train, x_test, y_test = load_mnist()
# Initialize model and optimizer
key = random.PRNGKey(0)
model = DenseModel(key)
optimizer = optax.adam(learning_rate=1e-3)
# Train the model
batch_size = 32
num_epochs = 10
trained_model = train(model, optimizer, x_train, y_train, batch_size, num_epochs)
test_accuracy = accuracy(trained_model, x_test, y_test)
logger.info(f"Test accuracy: {test_accuracy:.4f}")
if __name__ == "__main__":
main()
</syntaxhighlight>
d52e06e64c08148fb8e6bc4c0c63f68855f3a062
XMC4800
0
419
1816
2024-08-23T02:24:36Z
Goblinrum
25
Created page with "For controlling multiple CAN buses, our solution is to use the Infineon XMC4800, a MCU that integrates a M4 core and has both an EtherCAT node and 6x CAN nodes. This allows us..."
wikitext
text/x-wiki
For controlling multiple CAN buses, our solution is to use the Infineon XMC4800, a MCU that integrates a M4 core and has both an EtherCAT node and 6x CAN nodes. This allows us to control the CAN buses through either FS USB or (in the future) EtherCAT.
==== Testing the Evaluation Board ====
=== Programming the MCU ===
==== CAN example code ====
=== Integrated XMC4800 CAN board ===
TBD
[[Category:Guides]]
[[Category:Electronics]]
[[Category:Hardware]]
577a14779bb1e63c176440312861ded7a0301c28
1818
1816
2024-08-23T02:37:02Z
Goblinrum
25
wikitext
text/x-wiki
For controlling multiple CAN buses, our solution is to use the Infineon XMC4800, a MCU that integrates a M4 core and has both an EtherCAT node and 6x CAN nodes. This allows us to control the CAN buses through either FS USB or (in the future) EtherCAT.
* [https://www.infineon.com/cms/en/product/microcontroller/32-bit-industrial-microcontroller-based-on-arm-cortex-m/32-bit-xmc4000-industrial-microcontroller-arm-cortex-m4/xmc4800/ Product Page]
* [https://www.infineon.com/dgdl/Infineon-XMC4700-XMC4800-ReferenceManual-v01_02-EN.pdf?fileId=5546d46250cc1fdf01513f8e052d07fc Eval Board Reference Manual]
* [https://www.infineon.com/dgdl/Infineon-XMC4700-XMC4800-DataSheet-v01_02-EN.pdf?fileId=5546d462518ffd850151908ea8db00b3 Data Sheet]
* [https://www.infineon.com/dgdl/Infineon-xmc4800-UM-v01_00-EN.pdf?fileId=5546d4624d6fc3d5014ddd85f6d97832 MCU Reference Manual]
Most technical implementation details are in the Reference Manual
==== Testing the Evaluation Board ====
[[File:XMC4800 Breadboard.png|thumb]]
The testing setup is connected as the image describes. The eval board has one built-in CAN transceiver (while the rest of them are using external SN65HVD230 modules). The one built-in transceiver is connected to the other four CAN buses as a debug RX path
=== Programming the MCU ===
Follow the instructions [https://github.com/Infineon/mtb-example-xmc-can-loopback here] for an example on how to program the XMC4800 chip. The main takeaway is
# You need the J-SEGGER driver for your computer
# Connect via the DEBUG micro-B port, not the other USB port
# Use ModusToolBox to get started with the code examples
==== CAN example code ====
@Tom + @Ved(?) TODO
=== Integrated XMC4800 CAN board ===
TBD
[[Category:Guides]]
[[Category:Electronics]]
[[Category:Hardware]]
8fdb46f03e876bcd31135af35d3988e7d991b4f9
1819
1818
2024-08-23T03:07:20Z
Goblinrum
25
/* Testing the Evaluation Board */
wikitext
text/x-wiki
For controlling multiple CAN buses, our solution is to use the Infineon XMC4800, an MCU that integrates an Arm Cortex-M4 core and has both an EtherCAT node and 6x CAN nodes. This allows us to control the CAN buses through either FS USB or (in the future) EtherCAT.
* [https://www.infineon.com/cms/en/product/microcontroller/32-bit-industrial-microcontroller-based-on-arm-cortex-m/32-bit-xmc4000-industrial-microcontroller-arm-cortex-m4/xmc4800/ Product Page]
* [https://www.infineon.com/dgdl/Infineon-XMC4700-XMC4800-ReferenceManual-v01_02-EN.pdf?fileId=5546d46250cc1fdf01513f8e052d07fc Eval Board Reference Manual]
* [https://www.infineon.com/dgdl/Infineon-XMC4700-XMC4800-DataSheet-v01_02-EN.pdf?fileId=5546d462518ffd850151908ea8db00b3 Data Sheet]
* [https://www.infineon.com/dgdl/Infineon-xmc4800-UM-v01_00-EN.pdf?fileId=5546d4624d6fc3d5014ddd85f6d97832 MCU Reference Manual]
Most technical implementation details are in the Reference Manual.
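For a rough idea of what host-side control over such a USB-to-CAN bridge can look like, here is a hedged sketch using the [https://python-can.readthedocs.io/ python-can] library. It assumes the board is exposed to Linux as a SocketCAN interface named <code>can0</code>; the arbitration ID and payload are placeholders, not part of any actual firmware protocol:
<syntaxhighlight lang="python">
# Hypothetical host-side sketch using python-can. The interface/channel names
# and the message contents are placeholders, not the real firmware protocol.
import can

with can.Bus(interface="socketcan", channel="can0") as bus:
    # Send one frame to a made-up node ID.
    msg = can.Message(arbitration_id=0x123, data=[0x01, 0x02, 0x03, 0x04], is_extended_id=False)
    bus.send(msg)

    # Wait up to one second for a reply (e.g. on the debug RX path described below).
    reply = bus.recv(timeout=1.0)
    print(reply)
</syntaxhighlight>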
==== Testing the Evaluation Board ====
[[File:XMC4800 Breadboard.png|thumb]]
The testing setup is wired as the image describes. The eval board has one built-in CAN transceiver, while the remaining buses use external SN65HVD230 transceiver modules. The built-in transceiver is connected to the other four CAN buses as a debug RX path; it should be disconnected for other testing, since the built-in bus is only meant for basic loopback-style testing.
=== Programming the MCU ===
Follow the instructions [https://github.com/Infineon/mtb-example-xmc-can-loopback here] for an example of how to program the XMC4800 chip. The main takeaways are:
# You need the SEGGER J-Link driver on your computer
# Connect via the DEBUG micro-B port, not the other USB port
# Use ModusToolbox to get started with the code examples
==== CAN example code ====
@Tom + @Ved(?) TODO
=== Integrated XMC4800 CAN board ===
TBD
[[Category:Guides]]
[[Category:Electronics]]
[[Category:Hardware]]
8de28cb3c91934dd4fdfda548acd794953d98857
File:XMC4800 Breadboard.png
6
420
1817
2024-08-23T02:29:56Z
Goblinrum
25
wikitext
text/x-wiki
Breadboarded wiring for testing the Eval Board
22d38301f338178e885878506039851c72b1e304
Nvidia Jetson: Flashing Custom Firmware
0
315
1820
1541
2024-08-28T06:29:58Z
Vedant
24
wikitext
text/x-wiki
= Flashing Standard Firmware =
== SDKManager ==
SDKManager is available only on Linux, and can be installed here: https://developer.nvidia.com/sdk-manager
# Start up the SDKManager
# Put the Jetson into recovery mode. For the AGX, this can be done by pressing the recovery button while powering on the device. For the Nano and NX, however, a jumper will be required.
# Connect the Target Jetson to the host device and ensure that the target device is recognized.
# Follow the instructions on the application, choosing configurations as necessary.
= Flashing Custom Firmware (For Jetson 36.3) =
# Repeat Steps 1 to 3 as mentioned in Flashing Standard Firmware.
# Proceed to the second step of the SDKManager, where the respective individual dependencies and Jetson Images are listed and are to be installed. Proceed with the installation.
# When prompted to actually flash the Jetson, opt to skip. This will install the <code>nvidia</code> folder on your home directory, in which the <code>rootfs</code>, <code>kernel,</code> and <code>bootloader</code> are located.
# Navigate to <code>nvidia</code> and <code>cd</code> through its subdirectories, until <code>Linux for Tegra</code> is reached.
# Inside <code>Linux for Tegra</code>, <code>cd</code> into the <code>sources</code> folder. It should be unpopulated with the exception of some bash scripts. Run the <code>source_sync.sh</code> script and when asked to specify the release of the downloadable sources, enter <code>jetson_36.3</code>. This will install the
f13369375fc8dccd2653e361090b54e0e2bf63e4
1821
1820
2024-08-28T06:44:43Z
Vedant
24
/* Flashing Custom Firmware (For Jetson 36.3) */
wikitext
text/x-wiki
= Flashing Standard Firmware =
== SDKManager ==
SDKManager is available only on Linux, and can be installed here: https://developer.nvidia.com/sdk-manager
# Start up the SDKManager
# Put the Jetson into recovery mode. For the AGX, this can be done by pressing the recovery button while powering on the device. For the Nano and NX, however, a jumper will be required.
# Connect the Target Jetson to the host device and ensure that the target device is recognized.
# Follow the instructions on the application, choosing configurations as necessary.
= Flashing Custom Firmware (For Jetson 36.3) =
== Pre-requisites ==
# Please install required packages with the command <code>sudo apt install build-essential bc && sudo apt install build-essential bc</code>.
== Downloading the Toolchain ==
== Downloading the Kernel ==
# Follow steps 1 to 3 as mentioned in Flashing Standard Firmware.
# Proceed to the second step of the SDKManager, where the respective individual dependencies and Jetson Images are listed and are to be installed. Proceed with the installation.
# When prompted to actually flash the Jetson, opt to skip. This will install the <code>nvidia</code> folder on your home directory, in which the <code>rootfs</code>, <code>kernel</code>, and <code>bootloader</code> are located.
# Navigate to <code>nvidia</code> and <code>cd</code> through its subdirectories, until <code>Linux for Tegra</code> is reached.
# Inside <code>Linux for Tegra</code>, <code>cd</code> into the <code>sources</code> folder. It should be unpopulated with the exception of some bash scripts. Run the <code>source_sync.sh</code> script and when asked to specify the release tag of the downloadable sources, enter <code>jetson_36.3</code>. This will install the sources for the respective Jetson version as necessary. To find the release tag of future iterations of the Jetson firmware, please refer to its respective release notes.
# Once sources have been synced, the <code>sources</code> directory should now be populated with the required files.
8b8cccf6a95d78bdf3bb3a140bc451ad81010a5f
1822
1821
2024-08-28T06:49:30Z
Vedant
24
/* Downloading the Toolchain */
wikitext
text/x-wiki
= Flashing Standard Firmware =
== SDKManager ==
SDKManager is available only on Linux, and can be installed here: https://developer.nvidia.com/sdk-manager
# Start up the SDKManager
# Put the Jetson into recovery mode. For the AGX, this can be done by pressing the recovery button while powering on the device. For the Nano and NX, however, a jumper will be required.
# Connect the Target Jetson to the host device and ensure that the target device is recognized.
# Follow the instructions on the application, choosing configurations as necessary.
= Flashing Custom Firmware (For Jetson 36.3) =
== Pre-requisites ==
# Please install required packages with the command <code>sudo apt install build-essential bc && sudo apt install build-essential bc</code>.
=== Downloading the Toolchain ===
# Download the Toolchain binaries located in <code>https://developer.nvidia.com/embedded/jetson-linux</code>.
# From there, <code>mkdir $HOME/l4t-gcc</code>, <code>cd $HOME/l4t-gcc</code> and extract the installed toolchain into this newly created directory using the <code>tar</code> command.
== Downloading the Kernel ==
# Follow steps 1 to 3 as mentioned in Flashing Standard Firmware.
# Proceed to the second step of the SDKManager, where the respective individual dependencies and Jetson Images are listed and are to be installed. Proceed with the installation.
# When prompted to actually flash the Jetson, opt to skip. This will install the <code>nvidia</code> folder on your home directory, in which the <code>rootfs</code>, <code>kernel</code>, and <code>bootloader</code> are located.
# Navigate to <code>nvidia</code> and <code>cd</code> through its subdirectories, until <code>Linux for Tegra</code> is reached.
# Inside <code>Linux for Tegra</code>, <code>cd</code> into the <code>sources</code> folder. It should be unpopulated with the exception of some bash scripts. Run the <code>source_sync.sh</code> script and when asked to specify the release tag of the downloadable sources, enter <code>jetson_36.3</code>. This will install the sources for the respective Jetson version as necessary. To find the release tag of future iterations of the Jetson firmware, please refer to its respective release notes.
# Once sources have been synced, the <code>sources</code> directory should now be populated with the required files.
0dec29ec46946ccbe62ce4e6a3501dd2786580b9
1823
1822
2024-08-28T18:45:54Z
Vedant
24
wikitext
text/x-wiki
= Flashing Standard Firmware =
== SDKManager ==
SDKManager is available only on Linux, and can be installed here: https://developer.nvidia.com/sdk-manager
# Start up the SDKManager
# Put the Jetson into recovery mode. For the AGX, this can be done by pressing the recovery button while powering on the device. For the Nano and NX, however, a jumper will be required.
# Connect the Target Jetson to the host device and ensure that the target device is recognized.
# Follow the instructions on the application, choosing configurations as necessary.
= Flashing Custom Firmware (For Jetson 36.3) =
== Pre-requisites ==
# Please install required packages with the command <code>sudo apt install build-essential bc && sudo apt install build-essential bc</code>.
=== Downloading the Toolchain ===
# Download the Toolchain binaries located in <code>https://developer.nvidia.com/embedded/jetson-linux</code>.
# From there, <code>mkdir $HOME/l4t-gcc</code>, <code>cd $HOME/l4t-gcc</code> and extract the installed toolchain into this newly created directory using the <code>tar</code> command.
== Downloading the Kernel ==
# Follow steps 1 to 3 as mentioned in Flashing Standard Firmware.
# Proceed to the second step of the SDKManager, where the respective individual dependencies and Jetson Images are listed and are to be installed. Proceed with the installation.
# When prompted to actually flash the Jetson, opt to skip. This will install the <code>nvidia</code> folder on your home directory, in which the <code>rootfs</code>, <code>kernel</code>, and <code>bootloader</code> are located.
# Navigate to <code>nvidia</code> and <code>cd</code> through its subdirectories, until <code>Linux for Tegra</code> is reached.
# Inside <code>Linux for Tegra</code>, <code>cd</code> into the <code>sources</code> folder. It should be unpopulated with the exception of some bash scripts. Run the <code>source_sync.sh</code> script and when asked to specify the release tag of the downloadable sources, enter <code>jetson_36.3</code>. This will install the sources for the respective Jetson version as necessary. To find the release tag of future iterations of the Jetson firmware, please refer to its respective release notes.
# Once sources have been synced, the <code>sources</code> directory should now be populated with the required files.
== Customizing Kernel ==
# Within <code>source</code>, enter the
3e90ac814bb42582c504f56e8e7a7b985939a726
1826
1823
2024-08-29T22:42:52Z
Vedant
24
wikitext
text/x-wiki
= Flashing Standard Firmware =
== SDKManager ==
SDKManager is available only on Linux, and can be installed here: https://developer.nvidia.com/sdk-manager
# Start up the SDKManager
# Put the Jetson into recovery mode. For the AGX, this can be done by pressing the recovery button while powering on the device. For the Nano and NX, however, a jumper will be required.
# Connect the Target Jetson to the host device and ensure that the target device is recognized.
# Follow the instructions on the application, choosing configurations as necessary.
= Flashing Custom Firmware (For Jetson 36.3) =
== Pre-requisites ==
# Please install required packages with the command <code>sudo apt install build-essential bc && sudo apt install build-essential bc</code>.
=== Downloading the Toolchain ===
# Download the Toolchain binaries located in <code>https://developer.nvidia.com/embedded/jetson-linux</code>.
# From there, <code>mkdir $HOME/l4t-gcc</code>, <code>cd $HOME/l4t-gcc</code> and extract the installed toolchain into this newly created directory using the <code>tar</code> command.
== Downloading the Kernel ==
# Follow steps 1 to 3 as mentioned in Flashing Standard Firmware.
# Proceed to the second step of the SDKManager, where the respective individual dependencies and Jetson Images are listed and are to be installed. Proceed with the installation.
# When prompted to actually flash the Jetson, opt to skip. This will install the <code>nvidia</code> folder on your home directory, in which the <code>rootfs</code>, <code>kernel</code>, and <code>bootloader</code> are located.
# Navigate to <code>nvidia</code> and <code>cd</code> through its subdirectories, until <code>Linux for Tegra</code> is reached.
# Inside <code>Linux for Tegra</code>, <code>cd</code> into the <code>sources</code> folder. It should be unpopulated with the exception of some bash scripts. Run the <code>source_sync.sh</code> script and when asked to specify the release tag of the downloadable sources, enter <code>jetson_36.3</code>. This will install the sources for the respective Jetson version as necessary. To find the release tag of future iterations of the Jetson firmware, please refer to its respective release notes.
# Once sources have been synced, the <code>sources</code> directory should now be populated with the required files.
== Customizing Kernel ==
# Within <code>source</code>, enter the <code>kernel</code> eventually navigate to the <code>kernel-jammy-src</code> folder and run <code>make menuconfig ARCH=arm64</code>. This will bring up a UI with configurable drivers and peripherals. Select desired configurations and save.
# The configurations can be found within a <code>.config</code> file located within the same directory. Copy the contents and locate the <code>defconfig</code> file in <code>./arch/arm64/configs/</code>, overwriting it with the copied contents.
#
ccedfbdf13919f720e9dc4ecb69e936fb102c363
1827
1826
2024-08-29T23:04:43Z
Vedant
24
wikitext
text/x-wiki
= Flashing Standard Firmware =
== SDKManager ==
SDKManager is available only on Linux, and can be installed here: https://developer.nvidia.com/sdk-manager
# Start up the SDKManager
# Put the Jetson into recovery mode. For the AGX, this can be done by pressing the recovery button while powering on the device. For the Nano and NX, however, a jumper will be required.
# Connect the Target Jetson to the host device and ensure that the target device is recognized.
# Follow the instructions on the application, choosing configurations as necessary.
= Flashing Custom Firmware (For Jetson 36.3) =
== Pre-requisites ==
# Please install required packages with the command <code>sudo apt install build-essential bc && sudo apt install build-essential bc</code>.
=== Downloading the Toolchain ===
# Download the Toolchain binaries located in <code>https://developer.nvidia.com/embedded/jetson-linux</code>.
# From there, <code>mkdir $HOME/l4t-gcc</code>, <code>cd $HOME/l4t-gcc</code> and extract the installed toolchain into this newly created directory using the <code>tar</code> command.
== Downloading the Kernel ==
# Follow steps 1 to 3 as mentioned in Flashing Standard Firmware.
# Proceed to the second step of the SDKManager, where the respective individual dependencies and Jetson Images are listed and are to be installed. Proceed with the installation.
# When prompted to actually flash the Jetson, opt to skip. This will install the <code>nvidia</code> folder on your home directory, in which the <code>rootfs</code>, <code>kernel</code>, and <code>bootloader</code> are located.
# Navigate to <code>nvidia</code> and <code>cd</code> through its subdirectories, until <code>Linux for Tegra</code> is reached.
# Inside <code>Linux for Tegra</code>, <code>cd</code> into the <code>sources</code> folder. It should be unpopulated with the exception of some bash scripts. Run the <code>source_sync.sh</code> script and when asked to specify the release tag of the downloadable sources, enter <code>jetson_36.3</code>. This will install the sources for the respective Jetson version as necessary. To find the release tag of future iterations of the Jetson firmware, please refer to its respective release notes.
# Once sources have been synced, the <code>sources</code> directory should now be populated with the required files.
== Customizing Kernel ==
# Within <code>source</code>, enter the <code>kernel</code> eventually navigate to the <code>kernel-jammy-src</code> folder and run <code>make menuconfig ARCH=arm64</code>. This will bring up a UI with configurable drivers and peripherals. Select desired configurations and save.
# The configurations can be found within a <code>.config</code> file located within the same directory. Copy the contents and locate the <code>defconfig</code> file in <code>./arch/arm64/configs/</code>, overwriting it with the copied contents.
== Building Custom Kernel and Installing Modules ==
# Navigate back out to <code>sources</code>.
# Define the Cross-compilation toolchain with the commands <code>export CROSS_COMPILE=<toolchain-path>/bin/aarch64-buildroot-linux-gnu-</code>. If installation was done correctly as per the pre-requisites section, the command <code>export CROSS_COMPILE=$HOME/l4t-gcc/aarch64--glibc--stable-2022.08-1/bin/aarch64-buildroot-linux-gnu-</code> should work.
# Define the Cross-compilation toolchain with the commands <code>export CROSS_COMPILE_AARCH64_PATH=</code>, and <code>export CROSS_COMPILE_AARCH64=/bin/aarch64-buildroot-linux-gnu-</code>. (Potentially deprecated)
# Inside the sources, directory, make an output directory for built kernel files using <code>mkdir kernel_out</code>.
# Build the modules using the command <code>./nvbuild.sh -o kernel_out</code>.
#
2ddb0f02245307be74e80d8038f10deff16ce282
1828
1827
2024-08-30T19:42:22Z
Vedant
24
wikitext
text/x-wiki
= Flashing Standard Firmware =
== SDKManager ==
SDKManager is available only on Linux, and can be installed here: <code>https://developer.nvidia.com/sdk-manager</code>
# Start up the SDKManager
# Put the Jetson into recovery mode. For the AGX, this can be done by pressing the recovery button while powering on the device. For the Nano and NX, however, a jumper will be required.
# Connect the Target Jetson to the host device and ensure that the target device is recognized.
# Follow the instructions on the application, choosing configurations as necessary.
= Flashing Custom Firmware (For Jetson 36.3) =
== Pre-requisites ==
# Please install required packages with the command <code>sudo apt install build-essential bc && sudo apt install build-essential bc</code>.
=== Downloading the Toolchain ===
# Download the Toolchain binaries located in <code>https://developer.nvidia.com/embedded/jetson-linux</code>.
# From there, <code>mkdir $HOME/l4t-gcc</code>, <code>cd $HOME/l4t-gcc</code> and extract the installed toolchain into this newly created directory using the <code>tar</code> command.
== Downloading the Kernel ==
# Follow steps 1 to 3 as mentioned in Flashing Standard Firmware.
# Proceed to the second step of the SDKManager, where the respective individual dependencies and Jetson Images are listed and are to be installed. Proceed with the installation.
# When prompted to actually flash the Jetson, opt to skip. This will install the <code>nvidia</code> folder on your home directory, in which the <code>rootfs</code>, <code>kernel</code>, and <code>bootloader</code> are located.
# Navigate to <code>nvidia</code> and <code>cd</code> through its subdirectories, until <code>Linux for Tegra</code> is reached.
# Inside <code>Linux for Tegra</code>, <code>cd</code> into the <code>sources</code> folder. It should be unpopulated with the exception of some bash scripts. Run the <code>source_sync.sh</code> script and when asked to specify the release tag of the downloadable sources, enter <code>jetson_36.3</code>. This will install the sources for the respective Jetson version as necessary. To find the release tag of future iterations of the Jetson firmware, please refer to its respective release notes.
# Once sources have been synced, the <code>sources</code> directory should now be populated with the required files.
== Customizing Kernel ==
# Within <code>source</code>, enter the <code>kernel</code> eventually navigate to the <code>kernel-jammy-src</code> folder and run <code>make menuconfig ARCH=arm64</code>. This will bring up a UI with configurable drivers and peripherals. Select desired configurations and save.
# The configurations can be found within a <code>.config</code> file located within the same directory. Copy the contents and locate the <code>defconfig</code> file in <code>./arch/arm64/configs/</code>, overwriting it with the copied contents.
== Building Custom Kernel and Installing Modules ==
# Navigate back out to <code>sources</code>.
# Define the Cross-compilation toolchain with the commands <code>export CROSS_COMPILE=<toolchain-path>/bin/aarch64-buildroot-linux-gnu-</code>. If installation was done correctly as per the pre-requisites section, the command <code>export CROSS_COMPILE=$HOME/l4t-gcc/aarch64--glibc--stable-2022.08-1/bin/aarch64-buildroot-linux-gnu-</code> should work.
# Define the Cross-compilation toolchain with the commands <code>export CROSS_COMPILE_AARCH64_PATH=</code>, and <code>export CROSS_COMPILE_AARCH64=/bin/aarch64-buildroot-linux-gnu-</code>. (Potentially deprecated)
# Inside the sources, directory, make an output directory for built kernel files using <code>mkdir kernel_out</code>.
# Build the modules using the command <code>./nvbuild.sh -o kernel_out</code>. This will compile the drivers and device trees for the new kernel.
# Navigate out from the <code>sources</code> directory into the <code>Linux for Tegra</code>.
# Use the <code>cp</code> to overwrite <code>./rootfs/usr/lib/modules/5.15.136-tegra/updates/nvgpu.ko</code> with <code>./source/kernel_out/nvgpu/drivers/gpu/nvgpu/nvgpu.ko</code>.
# Repeat the previous step to replace <code>Linux_for_Tegra/kernel/dtb/</code> with
# To specify the installation path for the compiled modules, use the command <code>export INSTALL_MOD_PATH=$HOME/nvidia/nvidia_sdk/JetPack_6.0_Linux_JETSON_AGX_ORIN_TARGETS/Linux_for_Tegra/rootfs/</code>.
# Install modules using the command <code>./nvbuild -i</code>.
== Flashing the Kernel ==
# Ensure that
c7fe2c7630ebe3f9ee5e661d97fb8788aac82c50
1829
1828
2024-08-30T19:54:28Z
Vedant
24
wikitext
text/x-wiki
= Flashing Standard Firmware =
== SDKManager ==
SDKManager is available only on Linux, and can be installed here: <code>https://developer.nvidia.com/sdk-manager</code>
# Start up the SDKManager
# Put the Jetson into recovery mode. For the AGX, this can be done by pressing the recovery button while powering on the device. For the Nano and NX, however, a jumper will be required.
# Connect the Target Jetson to the host device and ensure that the target device is recognized.
# Follow the instructions on the application, choosing configurations as necessary.
= Flashing Custom Firmware (For Jetson 36.3) =
== Pre-requisites ==
# Please install the required packages with the command <code>sudo apt install build-essential bc</code>.
=== Downloading the Toolchain ===
# Download the toolchain binaries from <code>https://developer.nvidia.com/embedded/jetson-linux</code>.
# From there, run <code>mkdir $HOME/l4t-gcc</code> and <code>cd $HOME/l4t-gcc</code>, then extract the downloaded toolchain into this newly created directory using the <code>tar</code> command.
== Downloading the Kernel ==
# Follow steps 1 to 3 as mentioned in Flashing Standard Firmware.
# Proceed to the second step of the SDKManager, where the individual dependencies and Jetson images to be installed are listed. Continue with the installation.
# When prompted to actually flash the Jetson, opt to skip. This will install the <code>nvidia</code> folder in your home directory, in which the <code>rootfs</code>, <code>kernel</code>, and <code>bootloader</code> are located.
# Navigate to <code>nvidia</code> and <code>cd</code> through its subdirectories until <code>Linux for Tegra</code> is reached.
# Inside <code>Linux for Tegra</code>, <code>cd</code> into the <code>sources</code> folder. It should be unpopulated with the exception of some bash scripts. Run the <code>source_sync.sh</code> script and when asked to specify the release tag of the downloadable sources, enter <code>jetson_36.3</code>. This will install the sources for the respective Jetson version as necessary. To find the release tag of future iterations of the Jetson firmware, please refer to its respective release notes.
# Once sources have been synced, the <code>sources</code> directory should now be populated with the required files.
== Customizing Kernel ==
# Within <code>sources</code>, enter the <code>kernel</code> directory and navigate to the <code>kernel-jammy-src</code> folder, then run <code>make menuconfig ARCH=arm64</code>. This will bring up a UI with configurable drivers and peripherals. Select the desired configurations and save.
# The configurations can be found within a <code>.config</code> file located within the same directory. Copy the contents and locate the <code>defconfig</code> file in <code>./arch/arm64/configs/</code>, overwriting it with the copied contents.
== Building Custom Kernel and Installing Modules ==
# Navigate back out to <code>sources</code>.
# Define the Cross-compilation toolchain with the command <code>export CROSS_COMPILE=<toolchain-path>/bin/aarch64-buildroot-linux-gnu-</code>. If installation was done correctly as per the pre-requisites section, the command <code>export CROSS_COMPILE=$HOME/l4t-gcc/aarch64--glibc--stable-2022.08-1/bin/aarch64-buildroot-linux-gnu-</code> should work.
# Define the Cross-compilation toolchain with the commands <code>export CROSS_COMPILE_AARCH64_PATH=</code>, and <code>export CROSS_COMPILE_AARCH64=/bin/aarch64-buildroot-linux-gnu-</code>. (Potentially deprecated)
# Inside the <code>sources</code> directory, make an output directory for built kernel files using <code>mkdir kernel_out</code>.
# Build the modules using the command <code>./nvbuild.sh -o kernel_out</code>. This will compile the drivers and device trees for the new kernel.
# Navigate out from the <code>sources</code> directory into the <code>Linux for Tegra</code> directory.
# Use <code>cp</code> to overwrite <code>./rootfs/usr/lib/modules/5.15.136-tegra/updates/nvgpu.ko</code> with <code>./source/kernel_out/nvgpu/drivers/gpu/nvgpu/nvgpu.ko</code>.
# Repeat the previous step to replace <code>Linux_for_Tegra/kernel/dtb/</code> with <code>source/kernel_out/kernel/kernel-jammy-src/arch/arm64/boot/dts/nvidia</code>. Ensure that instead of overwriting the directory, only the files are copied over.
# Overwrite the Image in <code>./kernel</code> with <code>./source/kernel_out/kernel/kernel-jammy-src/arch/arm64/boot/Image</code>.
# To specify the installation path for the compiled modules, use the command <code>export INSTALL_MOD_PATH=$HOME/nvidia/nvidia_sdk/JetPack_6.0_Linux_JETSON_AGX_ORIN_TARGETS/Linux_for_Tegra/rootfs/</code>.
# Install modules using the command <code>./nvbuild.sh -i</code>. The Jetson is now ready to be flashed.
== Flashing the Kernel ==
Ensure that the target Jetson is connected to the host device and is in recovery mode. Navigate to the <code>Linux for Tegra</code> directory and run <code>sudo ./nvsdkmanager_flash.sh</code>. When prompted, disconnect the Jetson from the host device and allow it to boot. Congratulations, you have successfully flashed your Jetson with custom firmware.
0a866bc6bfccea12e15b2b9c202120e3f41e03ff
1830
1829
2024-08-30T19:55:57Z
Vedant
24
/* Building Custom Kernel and Installing Modules */
wikitext
text/x-wiki
= Flashing Standard Firmware =
== SDKManager ==
SDKManager is available only on Linux, and can be installed here: <code>https://developer.nvidia.com/sdk-manager</code>
# Start up the SDKManager
# Put the Jetson into recovery mode. For the AGX, this can be done by pressing the recovery button while powering on the device. For the Nano and NX, however, a jumper will be required.
# Connect the Target Jetson to the host device and ensure that the target device is recognized.
# Follow the instructions on the application, choosing configurations as necessary.
= Flashing Custom Firmware (For Jetson 36.3) =
== Pre-requisites ==
# Please install the required packages with the command <code>sudo apt install build-essential bc</code>.
=== Downloading the Toolchain ===
# Download the Toolchain binaries located at <code>https://developer.nvidia.com/embedded/jetson-linux</code>.
# From there, run <code>mkdir $HOME/l4t-gcc</code> and <code>cd $HOME/l4t-gcc</code>, then extract the downloaded toolchain into this newly created directory using the <code>tar</code> command.
== Downloading the Kernel ==
# Follow steps 1 to 3 as mentioned in Flashing Standard Firmware.
# Proceed to the second step of the SDKManager, where the individual dependencies and Jetson images to be installed are listed. Continue with the installation.
# When prompted to actually flash the Jetson, opt to skip. This will install the <code>nvidia</code> folder in your home directory, in which the <code>rootfs</code>, <code>kernel</code>, and <code>bootloader</code> are located.
# Navigate to <code>nvidia</code> and <code>cd</code> through its subdirectories until <code>Linux for Tegra</code> is reached.
# Inside <code>Linux for Tegra</code>, <code>cd</code> into the <code>sources</code> folder. It should be unpopulated with the exception of some bash scripts. Run the <code>source_sync.sh</code> script and when asked to specify the release tag of the downloadable sources, enter <code>jetson_36.3</code>. This will install the sources for the respective Jetson version as necessary. To find the release tag of future iterations of the Jetson firmware, please refer to its respective release notes.
# Once sources have been synced, the <code>sources</code> directory should now be populated with the required files.
== Customizing Kernel ==
# Within <code>sources</code>, enter the <code>kernel</code> directory and navigate to the <code>kernel-jammy-src</code> folder, then run <code>make menuconfig ARCH=arm64</code>. This will bring up a UI with configurable drivers and peripherals. Select the desired configurations and save.
# The configurations can be found within a <code>.config</code> file located within the same directory. Copy the contents and locate the <code>defconfig</code> file in <code>./arch/arm64/configs/</code>, overwriting it with the copied contents.
== Building Custom Kernel and Installing Modules ==
# Navigate back out to <code>sources</code>.
# Define the Cross-compilation toolchain with the command <code>export CROSS_COMPILE=<toolchain-path>/bin/aarch64-buildroot-linux-gnu-</code>. If installation was done correctly as per the pre-requisites section, the command <code>export CROSS_COMPILE=$HOME/l4t-gcc/aarch64--glibc--stable-2022.08-1/bin/aarch64-buildroot-linux-gnu-</code> should work.
# Define the Cross-compilation toolchain with the commands <code>export CROSS_COMPILE_AARCH64_PATH=</code>, and <code>export CROSS_COMPILE_AARCH64=/bin/aarch64-buildroot-linux-gnu-</code>. (Potentially deprecated)
# Inside the <code>sources</code> directory, make an output directory for built kernel files using <code>mkdir kernel_out</code>.
# Build the modules using the command <code>./nvbuild.sh -o kernel_out</code>. This will compile the drivers and device trees for the new kernel.
# Navigate out from the <code>sources</code> directory into the <code>Linux for Tegra</code> directory.
# Use <code>cp</code> to overwrite <code>./rootfs/usr/lib/modules/5.15.136-tegra/updates/nvgpu.ko</code> with <code>./source/kernel_out/nvgpu/drivers/gpu/nvgpu/nvgpu.ko</code>.
# Repeat the previous step to replace <code>Linux_for_Tegra/kernel/dtb/</code> with <code>source/kernel_out/kernel/kernel-jammy-src/arch/arm64/boot/dts/nvidia</code>. Ensure that instead of overwriting the directory, only the files are copied over.
# Overwrite the Image file in <code>./kernel</code> with <code>./source/kernel_out/kernel/kernel-jammy-src/arch/arm64/boot/Image</code>.
# To specify the installation path for the compiled modules, use the command <code>export INSTALL_MOD_PATH=$HOME/nvidia/nvidia_sdk/JetPack_6.0_Linux_JETSON_AGX_ORIN_TARGETS/Linux_for_Tegra/rootfs/</code>.
# Install modules using the command <code>./nvbuild.sh -i</code>. The Jetson is now ready to be flashed.
== Flashing the Kernel ==
Ensure that the target Jetson is connected to the host device and is in recovery mode. Navigate to the <code>Linux for Tegra</code> directory and run <code>sudo ./nvsdkmanager_flash.sh</code>. When prompted, disconnect the Jetson from the host device and allow it to boot. Congratulations, you have successfully flashed your Jetson with custom firmware.
8c848c263b78b63b0e05a2211cc99ba3f2f27518
1831
1830
2024-08-30T19:56:24Z
Vedant
24
/* Flashing the Kernel */
wikitext
text/x-wiki
= Flashing Standard Firmware =
== SDKManager ==
SDKManager is available only on Linux, and can be installed here: <code>https://developer.nvidia.com/sdk-manager</code>
# Start up the SDKManager
# Put the Jetson into recovery mode. For the AGX, this can be done by pressing the recovery button while powering on the device. For the Nano and NX, however, a jumper will be required.
# Connect the Target Jetson to the host device and ensure that the target device is recognized.
# Follow the instructions on the application, choosing configurations as necessary.
= Flashing Custom Firmware (For Jetson 36.3) =
== Pre-requisites ==
# Please install the required packages with the command <code>sudo apt install build-essential bc</code>.
=== Downloading the Toolchain ===
# Download the Toolchain binaries located at <code>https://developer.nvidia.com/embedded/jetson-linux</code>.
# From there, run <code>mkdir $HOME/l4t-gcc</code> and <code>cd $HOME/l4t-gcc</code>, then extract the downloaded toolchain into this newly created directory using the <code>tar</code> command.
== Downloading the Kernel ==
# Follow steps 1 to 3 as mentioned in Flashing Standard Firmware.
# Proceed to the second step of the SDKManager, where the individual dependencies and Jetson images to be installed are listed. Continue with the installation.
# When prompted to actually flash the Jetson, opt to skip. This will install the <code>nvidia</code> folder in your home directory, in which the <code>rootfs</code>, <code>kernel</code>, and <code>bootloader</code> are located.
# Navigate to <code>nvidia</code> and <code>cd</code> through its subdirectories until <code>Linux for Tegra</code> is reached.
# Inside <code>Linux for Tegra</code>, <code>cd</code> into the <code>sources</code> folder. It should be unpopulated with the exception of some bash scripts. Run the <code>source_sync.sh</code> script and when asked to specify the release tag of the downloadable sources, enter <code>jetson_36.3</code>. This will install the sources for the respective Jetson version as necessary. To find the release tag of future iterations of the Jetson firmware, please refer to its respective release notes.
# Once sources have been synced, the <code>sources</code> directory should now be populated with the required files.
== Customizing Kernel ==
# Within <code>sources</code>, enter the <code>kernel</code> directory and navigate to the <code>kernel-jammy-src</code> folder, then run <code>make menuconfig ARCH=arm64</code>. This will bring up a UI with configurable drivers and peripherals. Select the desired configurations and save.
# The configurations can be found within a <code>.config</code> file located within the same directory. Copy the contents and locate the <code>defconfig</code> file in <code>./arch/arm64/configs/</code>, overwriting it with the copied contents.
== Building Custom Kernel and Installing Modules ==
# Navigate back out to <code>sources</code>.
# Define the Cross-compilation toolchain with the command <code>export CROSS_COMPILE=<toolchain-path>/bin/aarch64-buildroot-linux-gnu-</code>. If installation was done correctly as per the pre-requisites section, the command <code>export CROSS_COMPILE=$HOME/l4t-gcc/aarch64--glibc--stable-2022.08-1/bin/aarch64-buildroot-linux-gnu-</code> should work.
# Define the Cross-compilation toolchain with the commands <code>export CROSS_COMPILE_AARCH64_PATH=</code>, and <code>export CROSS_COMPILE_AARCH64=/bin/aarch64-buildroot-linux-gnu-</code>. (Potentially deprecated)
# Inside the <code>sources</code> directory, make an output directory for built kernel files using <code>mkdir kernel_out</code>.
# Build the modules using the command <code>./nvbuild.sh -o kernel_out</code>. This will compile the drivers and device trees for the new kernel.
# Navigate out from the <code>sources</code> directory into the <code>Linux for Tegra</code> directory.
# Use <code>cp</code> to overwrite <code>./rootfs/usr/lib/modules/5.15.136-tegra/updates/nvgpu.ko</code> with <code>./source/kernel_out/nvgpu/drivers/gpu/nvgpu/nvgpu.ko</code>.
# Repeat the previous step to replace <code>Linux_for_Tegra/kernel/dtb/</code> with <code>source/kernel_out/kernel/kernel-jammy-src/arch/arm64/boot/dts/nvidia</code>. Ensure that instead of overwriting the directory, only the files are copied over.
# Overwrite the Image file in <code>./kernel</code> with <code>./source/kernel_out/kernel/kernel-jammy-src/arch/arm64/boot/Image</code>.
# To specify the installation path for the compiled modules, use the command <code>export INSTALL_MOD_PATH=$HOME/nvidia/nvidia_sdk/JetPack_6.0_Linux_JETSON_AGX_ORIN_TARGETS/Linux_for_Tegra/rootfs/</code>.
# Install modules using the command <code>./nvbuild.sh -i</code>. The Jetson is now ready to be flashed.
== Flashing the Kernel ==
Ensure that the target Jetson is connected to the host device and is in recovery mode. Navigate to the <code>Linux for Tegra</code> directory and run <code>sudo ./nvsdkmanager_flash.sh</code>. When prompted, disconnect the Jetson from the host device and allow it to boot. Congratulations, you have successfully flashed your Jetson with custom firmware.
2f886fe526ac33b1b2ce7060454ea3613933b307
1832
1831
2024-08-30T19:56:36Z
Vedant
24
/* Flashing the Kernel */
wikitext
text/x-wiki
= Flashing Standard Firmware =
== SDKManager ==
SDKManager is available only on Linux, and can be installed here: <code>https://developer.nvidia.com/sdk-manager</code>
# Start up the SDKManager
# Put the Jetson into recovery mode. For the AGX, this can be done by pressing the recovery button while powering on the device. For the Nano and NX, however, a jumper will be required.
# Connect the Target Jetson to the host device and ensure that the target device is recognized.
# Follow the instructions on the application, choosing configurations as necessary.
= Flashing Custom Firmware (For Jetson 36.3) =
== Pre-requisites ==
# Please install the required packages with the command <code>sudo apt install build-essential bc</code>.
=== Downloading the Toolchain ===
# Download the Toolchain binaries located at <code>https://developer.nvidia.com/embedded/jetson-linux</code>.
# From there, run <code>mkdir $HOME/l4t-gcc</code> and <code>cd $HOME/l4t-gcc</code>, then extract the downloaded toolchain into this newly created directory using the <code>tar</code> command (see the sketch below).
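For reference, a minimal sketch of the toolchain setup. It assumes the archive unpacks into the <code>aarch64--glibc--stable-2022.08-1</code> directory referenced later in this guide; the archive filename below is an assumption, so substitute whatever file you actually downloaded.
<syntaxhighlight lang="bash">
# Create a dedicated directory for the cross-compilation toolchain.
mkdir -p "$HOME/l4t-gcc"
cd "$HOME/l4t-gcc"

# Extract the downloaded toolchain archive here. The filename below is an
# assumption; use the archive you downloaded from the Jetson Linux page.
tar xf ~/Downloads/aarch64--glibc--stable-2022.08-1.tar.bz2
</syntaxhighlight>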
== Downloading the Kernel ==
# Follow steps 1 to 3 as mentioned in Flashing Standard Firmware.
# Proceed to the second step of the SDKManager, where the individual dependencies and Jetson images to be installed are listed. Continue with the installation.
# When prompted to actually flash the Jetson, opt to skip. This will install the <code>nvidia</code> folder in your home directory, in which the <code>rootfs</code>, <code>kernel</code>, and <code>bootloader</code> are located.
# Navigate to <code>nvidia</code> and <code>cd</code> through its subdirectories until <code>Linux for Tegra</code> is reached.
# Inside <code>Linux for Tegra</code>, <code>cd</code> into the <code>sources</code> folder. It should be unpopulated with the exception of some bash scripts. Run the <code>source_sync.sh</code> script and when asked to specify the release tag of the downloadable sources, enter <code>jetson_36.3</code>. This will install the sources for the respective Jetson version as necessary. To find the release tag of future iterations of the Jetson firmware, please refer to its respective release notes.
# Once the sources have been synced, the <code>sources</code> directory should now be populated with the required files (a rough command sketch of this step is shown below).
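A rough sketch of the sync step, assuming SDKManager placed <code>Linux for Tegra</code> under the JetPack 6.0 AGX Orin path used later in this guide; adjust the path for your setup.
<syntaxhighlight lang="bash">
# Assumed JetPack 6.0 AGX Orin path; adjust to wherever SDKManager put Linux_for_Tegra.
cd "$HOME/nvidia/nvidia_sdk/JetPack_6.0_Linux_JETSON_AGX_ORIN_TARGETS/Linux_for_Tegra/sources"

# The script prompts for the release tag; enter jetson_36.3 when asked.
./source_sync.sh
</syntaxhighlight>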
== Customizing Kernel ==
# Within <code>sources</code>, enter the <code>kernel</code> directory and navigate to the <code>kernel-jammy-src</code> folder, then run <code>make menuconfig ARCH=arm64</code>. This will bring up a UI with configurable drivers and peripherals. Select the desired configurations and save.
# The resulting configuration is written to a <code>.config</code> file in the same directory. Copy its contents over the <code>defconfig</code> file in <code>./arch/arm64/configs/</code> (a short command sketch is shown below).
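A short sketch of the two configuration steps, assuming you start from the <code>sources</code> directory and that the kernel tree synced into <code>kernel/kernel-jammy-src</code> as described above.
<syntaxhighlight lang="bash">
# Starting from the sources directory, descend into the kernel tree.
cd kernel/kernel-jammy-src

# Pick drivers and peripherals in the menuconfig UI, then save on exit.
make menuconfig ARCH=arm64

# Persist the chosen configuration by overwriting the arm64 defconfig.
cp .config arch/arm64/configs/defconfig
</syntaxhighlight>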
== Building Custom Kernel and Installing Modules ==
# Navigate back out to <code>sources</code>.
# Define the Cross-compilation toolchain with the command <code>export CROSS_COMPILE=<toolchain-path>/bin/aarch64-buildroot-linux-gnu-</code>. If installation was done correctly as per the pre-requisites section, the command <code>export CROSS_COMPILE=$HOME/l4t-gcc/aarch64--glibc--stable-2022.08-1/bin/aarch64-buildroot-linux-gnu-</code> should work.
# Define the Cross-compilation toolchain with the commands <code>export CROSS_COMPILE_AARCH64_PATH=</code>, and <code>export CROSS_COMPILE_AARCH64=/bin/aarch64-buildroot-linux-gnu-</code>. (Potentially deprecated)
# Inside the <code>sources</code> directory, make an output directory for built kernel files using <code>mkdir kernel_out</code>.
# Build the modules using the command <code>./nvbuild.sh -o kernel_out</code>. This will compile the drivers and device trees for the new kernel.
# Navigate out from the <code>sources</code> directory into the <code>Linux for Tegra</code> directory.
# Use <code>cp</code> to overwrite <code>./rootfs/usr/lib/modules/5.15.136-tegra/updates/nvgpu.ko</code> with <code>./source/kernel_out/nvgpu/drivers/gpu/nvgpu/nvgpu.ko</code>.
# Repeat the previous step to replace <code>Linux_for_Tegra/kernel/dtb/</code> with <code>source/kernel_out/kernel/kernel-jammy-src/arch/arm64/boot/dts/nvidia</code>. Ensure that instead of overwriting the directory, only the files are copied over.
# Overwrite the Image file in <code>./kernel</code> with <code>./source/kernel_out/kernel/kernel-jammy-src/arch/arm64/boot/Image</code>.
# To specify the installation path for the compiled modules, use the command <code>export INSTALL_MOD_PATH=$HOME/nvidia/nvidia_sdk/JetPack_6.0_Linux_JETSON_AGX_ORIN_TARGETS/Linux_for_Tegra/rootfs/</code>.
# Install modules using the command <code>./nvbuild.sh -i</code>. The Jetson is now ready to be flashed (a consolidated command sketch of this section is shown below).
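The build-and-install sequence above, consolidated into a single hedged sketch. It assumes it is run from the <code>sources</code> directory; the <code>L4T</code> variable is only an illustrative shorthand for your <code>Linux for Tegra</code> path, which may differ from the JetPack 6.0 AGX Orin path shown.
<syntaxhighlight lang="bash">
# Run from the sources directory.
export CROSS_COMPILE="$HOME/l4t-gcc/aarch64--glibc--stable-2022.08-1/bin/aarch64-buildroot-linux-gnu-"

# Build the kernel, drivers, and device trees into kernel_out.
mkdir -p kernel_out
./nvbuild.sh -o kernel_out

# Illustrative shorthand for the Linux for Tegra directory; adjust as needed.
L4T="$HOME/nvidia/nvidia_sdk/JetPack_6.0_Linux_JETSON_AGX_ORIN_TARGETS/Linux_for_Tegra"

# Copy the freshly built GPU driver, device trees, and kernel image into place.
cp kernel_out/nvgpu/drivers/gpu/nvgpu/nvgpu.ko \
   "$L4T/rootfs/usr/lib/modules/5.15.136-tegra/updates/nvgpu.ko"
cp kernel_out/kernel/kernel-jammy-src/arch/arm64/boot/dts/nvidia/* "$L4T/kernel/dtb/"
cp kernel_out/kernel/kernel-jammy-src/arch/arm64/boot/Image "$L4T/kernel/Image"

# Install the modules into the target root filesystem, as in the steps above.
export INSTALL_MOD_PATH="$L4T/rootfs/"
./nvbuild.sh -i
</syntaxhighlight>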
== Flashing the Kernel ==
Ensure that the target Jetson is connected to the host device and is in recovery mode. Navigate to the <code>Linux for Tegra</code> directory and run <code>sudo ./nvsdkmanager_flash.sh</code>. When prompted, disconnect the Jetson from the host device and allow it to boot. Congratulations, you have successfully flashed your Jetson with custom firmware.
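For convenience, the flashing step as a short sketch, again assuming the JetPack 6.0 AGX Orin install path used above.
<syntaxhighlight lang="bash">
# With the Jetson connected to the host and in recovery mode:
cd "$HOME/nvidia/nvidia_sdk/JetPack_6.0_Linux_JETSON_AGX_ORIN_TARGETS/Linux_for_Tegra"
sudo ./nvsdkmanager_flash.sh
</syntaxhighlight>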
8c848c263b78b63b0e05a2211cc99ba3f2f27518
Jetson Orin Notes
0
218
1824
1441
2024-08-29T04:59:11Z
Budzianowski
19
wikitext
text/x-wiki
Notes on programming/interfacing with Jetson Orin hardware.
=== Upgrading AGX to Jetson Linux 36.3 ===
==== BSP approach (avoids SDK Manager) ====
* Requires Ubuntu 22.04. Very unhappy to work on Gentoo.
* Requires Intel/AMD 64bit CPU.
* Download "Driver Package (BSP)" from [https://developer.nvidia.com/embedded/jetson-linux here]
* Unpack (as root, get used to doing most of this as root), preserving privileges
** <code>tar xjpf ...</code>
* Download "Sample Root Filesystem"
* Unpack (as root..) into rootfs directory inside of the BSP archive above.
* Run <code>sudo ./tools/l4t_flash_prerequisites.sh</code>
* Run <code>./apply_binaries.sh</code> from the BSP
** Note: If apply_binaries (or frankly, anything, this is brittle) fails, remove and recreate rootfs - the OS might be left in an unbootable state.
* Reboot AGX into "Recovery Mode" - hold the recovery button and reset button, release simultaneously ((sic) reset first?)
* Connect USB-C cable to the debug port ("front" USB-c)
* The Nvidia AGX device should appear in the <code>lsusb</code> output under NVIDIA Corp. APX
* Run <code>./flash.sh</code>. There are different options for different use cases; see the [https://docs.nvidia.com/jetson/archives/r36.3/DeveloperGuide/IN/QuickStart.html#in-quickstart Quick Start guide].
Jetson AGX Orin Developer Kit (eMMC):
<code>$ sudo ./flash.sh jetson-agx-orin-devkit internal</code>
* Watch for a few minutes, typically it crashes early, then go for lunch.
=== Upgrading Nano to Jetson Linux 36.3 ===
==== Buildroot approach (avoids SDK Manager) ====
* <code>sudo mkfs.ext4 /dev/sdb</code>
* <code>sudo umount /dev/sdb</code>
* <code>lsblk -f</code>
* Transfer rootfs onto sd card
<code>sudo dd if=/home/dpsh/Downloads/rootfs.ext4 of=/dev/sdc1 bs=1M status=progress</code>
* Make sure the data was transferred to SD/NVMe
<code>sync</code>
[[Category: Firmware]]
8e242c09b16592da87cabe1bd4ceec9d04ca3329
File:Jetson BCM Mapping.png
6
421
1833
2024-09-01T01:28:50Z
Goblinrum
25
wikitext
text/x-wiki
BCM Pinout names for Jetson
99ccf6f68f4e2fd72ca8353d17c4471f5c21a831
Waveshare LCDs
0
422
1834
2024-09-01T01:55:56Z
Goblinrum
25
Created page with "Instructions to run the [https://www.waveshare.com/wiki/1.69inch_LCD_Module Waveshare 1.69in LCDs] on the Jetson carrier boards [[File:Jetson BCM Mapping.png|thumb]] Note th..."
wikitext
text/x-wiki
Instructions to run the [https://www.waveshare.com/wiki/1.69inch_LCD_Module Waveshare 1.69in LCDs] on the Jetson carrier boards
[[File:Jetson BCM Mapping.png|thumb]]
Note the Linux BCM mappings in the image shown. Theoretically with the Jetson.GPIO library, you are able to use either the physical board pin mapping or the BCM mapping, but this process has only been vetted for the BCM mode. When referring to pins in the Python package, use the BCM mapping. For example, if you use pin 32, use "D12" when referring to the pin. With the Adafruit package, this is written as "board.D12"
The [https://www.waveshare.com/wiki/1.5inch_LCD_Module 1.5 Inch LCD Module] page actually tells you how to set up the Jetson for the Waveshare LCDs. However, since it uses a different controller, the actual firmware needs to be modified.
You can find the draft changes repo [https://github.com/goblinrum/waveshare-1.69in-lcd-jetson here].
Steps to reproduce:
# On Jetson, use the jetson-io to enable SPI1. Steps are shown in the 1.5 inch LCD Module guide.
# Power down the Jetson and wire up the display using the following pinout:
{| class="wikitable"
|-
! Function !! Jetson Pin !! BCM Pin
|-
| Example || Example || Example
|-
| Example || Example || Example
|-
| Example || Example || Example
|-
| Example || Example || Example
|-
| Example || Example || Example
|-
| Example || Example || Example
|-
| Example || Example || Example
|-
| Example || Example || Example
|}
# Clone the repository and run the code. You may need to install the following dependencies:
TBD
75f55547dc83d7e670fd48303a13509f946e8192
1835
1834
2024-09-01T01:56:38Z
Goblinrum
25
wikitext
text/x-wiki
Instructions to run the [https://www.waveshare.com/wiki/1.69inch_LCD_Module Waveshare 1.69in LCDs] on the Jetson carrier boards
[[File:Jetson BCM Mapping.png|thumb]]
Note the Linux BCM mappings in the image shown. Theoretically with the Jetson.GPIO library, you are able to use either the physical board pin mapping or the BCM mapping, but this process has only been vetted for the BCM mode. When referring to pins in the Python package, use the BCM mapping. For example, if you use pin 32, use "D12" when referring to the pin. With the Adafruit package, this is written as "board.D12"
The [https://www.waveshare.com/wiki/1.5inch_LCD_Module 1.5 Inch LCD Module] page actually tells you how to set up the Jetson for the Waveshare LCDs. However, since it uses a different controller, the actual firmware needs to be modified.
You can find the draft changes repo [https://github.com/goblinrum/waveshare-1.69in-lcd-jetson here].
Steps to reproduce:
# On Jetson, use the jetson-io to enable SPI1. Steps are shown in the 1.5 inch LCD Module guide.
# Power down the Jetson and wire up the display using the following pinout:
{| class="wikitable"
|-
! Function !! Jetson Pin !! BCM Pin
|-
| Example || Example || Example
|-
| Example || Example || Example
|-
| Example || Example || Example
|-
| Example || Example || Example
|-
| Example || Example || Example
|-
| Example || Example || Example
|-
| Example || Example || Example
|-
| Example || Example || Example
|}
# Clone the repository and run the code. You may need to install the following dependencies:
TBD
[[Category:Electronics]]
[[Category:Hardware]]
0d010861374cc127382bf05d9b1fff01bef2e185
1836
1835
2024-09-01T02:07:25Z
Goblinrum
25
wikitext
text/x-wiki
Instructions to run the [https://www.waveshare.com/wiki/1.69inch_LCD_Module Waveshare 1.69in LCDs] on the Jetson carrier boards
[[File:Jetson BCM Mapping.png|thumb]]
Note the Linux BCM mappings in the image shown. Theoretically with the Jetson.GPIO library, you are able to use either the physical board pin mapping or the BCM mapping, but this process has only been vetted for the BCM mode. When referring to pins in the Python package, use the BCM mapping. For example, if you use pin 32, use "D12" when referring to the pin. With the Adafruit package, this is written as "board.D12"
The [https://www.waveshare.com/wiki/1.5inch_LCD_Module 1.5 Inch LCD Module] page actually tells you how to set up the Jetson for the Waveshare LCDs. However, since it uses a different controller, the actual firmware needs to be modified.
You can find the draft changes repo [https://github.com/goblinrum/waveshare-1.69in-lcd-jetson here].
Steps to reproduce:
# On Jetson, use the jetson-io to enable SPI1. Steps are shown in the 1.5 inch LCD Module guide.
# Power down the Jetson and wire up the display using the following pinout:
{| class="wikitable"
|-
! Function !! Jetson Pin !! BCM Pin
|-
| 3v3 || 1 || 3v3
|-
| GND || 6 || GND
|-
| DIN/MOSI || 19 || D10 (Automatically selected by Jetson-GPIO)
|-
| SCK || 23 || D11 (Automatically selected)
|-
| CS || 24 || CE0 (Automatically selected)
|-
| DC || 22 || D25
|-
| RST || 31 || D6
|-
| BL || 32 || D12
|}
# Clone the repository and run the code. You may need to install the following dependencies:
TBD
[[Category:Electronics]]
[[Category:Hardware]]
eb7c3c777a5bf2f1ac8bff532d8bc49069d4054d
1837
1836
2024-09-01T02:12:21Z
Goblinrum
25
wikitext
text/x-wiki
Instructions to run the [https://www.waveshare.com/wiki/1.69inch_LCD_Module Waveshare 1.69in LCDs] on the Jetson carrier boards
[[File:Jetson BCM Mapping.png|thumb]]
Note the Linux BCM mappings in the image shown. Theoretically with the Jetson.GPIO library, you are able to use either the physical board pin mapping or the BCM mapping, but this process has only been vetted for the BCM mode. When referring to pins in the Python package, use the BCM mapping. For example, if you use pin 32, use "D12" when referring to the pin. With the Adafruit package, this is written as "board.D12"
The [https://www.waveshare.com/wiki/1.5inch_LCD_Module 1.5 Inch LCD Module] page actually tells you how to set up the Jetson for the Waveshare LCDs. However, since it uses a different controller, the actual firmware needs to be modified.
You can find the draft changes repo [https://github.com/goblinrum/waveshare-1.69in-lcd-jetson here].
Steps to reproduce:
# On Jetson, use the jetson-io to enable SPI1. Steps are shown in the 1.5 inch LCD Module guide.
# Power down the Jetson and wire up the display using the following pinout:
{| class="wikitable"
|-
! Function !! Jetson Pin !! BCM Pin
|-
| 3v3 || 1 || 3v3
|-
| GND || 6 || GND
|-
| DIN/MOSI || 19 || D10 (Automatically selected by Jetson-GPIO)
|-
| SCK || 23 || D11 (Automatically selected)
|-
| CS || 24 || CE0 (Automatically selected)
|-
| DC || 22 || D25
|-
| RST || 31 || D6
|-
| BL || 32 || D12
|}
Clone the repository and run the code. You may need to install the following pip dependencies:
* Jetson.GPIO
* adafruit-circuitpython-busdevice
* spidev
* adafruit-blinka-displayio
* adafruit-circuitpython-rgb-display
* adafruit-circuitpython-st7789
* Pillow
[[Category:Electronics]]
[[Category:Hardware]]
9c0a1a40ccd459a7e5ba7ec9f76ca4a5d6b5a743
Waveshare LCDs
0
422
1838
1837
2024-09-01T02:13:24Z
Goblinrum
25
wikitext
text/x-wiki
Instructions to run the [https://www.waveshare.com/wiki/1.69inch_LCD_Module Waveshare 1.69in LCDs] on the Jetson carrier boards
[[File:Jetson BCM Mapping.png|thumb]]
Note the Linux BCM mappings in the image shown. Theoretically with the Jetson.GPIO library, you are able to use either the physical board pin mapping or the BCM mapping, but this process has only been vetted for the BCM mode. When referring to pins in the Python package, use the BCM mapping. For example, if you use pin 32, use "D12" when referring to the pin. With the Adafruit package, this is written as "board.D12"
The [https://www.waveshare.com/wiki/1.5inch_LCD_Module 1.5 Inch LCD Module] page actually tells you how to set up the Jetson for the Waveshare LCDs. However, since it uses a different controller, the actual firmware needs to be modified.
You can find the draft changes repo [https://github.com/goblinrum/waveshare-1.69in-lcd-jetson here].
Steps to reproduce:
# On Jetson, use the jetson-io to enable SPI1. Steps are shown in the 1.5 inch LCD Module guide.
# Power down the Jetson and wire up the display using the following pinout:
{| class="wikitable"
|-
! Function !! Jetson Pin !! BCM Pin
|-
| 3v3 || 1 || 3v3
|-
| GND || 6 || GND
|-
| DIN/MOSI || 19 || D10 (Automatically selected by Jetson-GPIO)
|-
| SCK || 23 || D11 (Automatically selected)
|-
| CS || 24 || CE0 (Automatically selected)
|-
| DC || 22 || D25
|-
| RST || 31 || D6
|-
| BL || 32 || D12
|}
Clone the repository and launch run.py. You may need to install the following pip dependencies:
* Jetson.GPIO
* adafruit-circuitpython-busdevice
* spidev
* adafruit-blinka-displayio
* adafruit-circuitpython-rgb-display
* adafruit-circuitpython-st7789
* Pillow
The example code should draw some shapes every 2 seconds, followed by the Waveshare logo, then a Yuan dynasty poem, and lastly a series of images.
[[Category:Electronics]]
[[Category:Hardware]]
cb571f95e5c592772ce3e0c3ca12dc1ced632b3e
1839
1838
2024-09-01T02:18:47Z
Goblinrum
25
wikitext
text/x-wiki
Instructions to run the [https://www.waveshare.com/wiki/1.69inch_LCD_Module Waveshare 1.69in LCDs] on the Jetson carrier boards
[[File:Jetson BCM Mapping.png|thumb]]
Note the Linux BCM mappings in the image shown. Theoretically with the Jetson.GPIO library, you are able to use either the physical board pin mapping or the BCM mapping, but this process has only been vetted for the BCM mode. When referring to pins in the Python package, use the BCM mapping. For example, if you use pin 32, use "D12" when referring to the pin. With the Adafruit package, this is written as "board.D12"
The [https://www.waveshare.com/wiki/1.5inch_LCD_Module 1.5 Inch LCD Module] page actually tells you how to set up the Jetson for the Waveshare LCDs. However, since it uses a different controller, the actual firmware needs to be modified.
You can find the draft changes repo [https://github.com/goblinrum/waveshare-1.69in-lcd-jetson here].
Steps to reproduce:
# On Jetson, use the jetson-io to enable SPI1. Steps are shown in the [https://www.waveshare.com/wiki/1.5inch_LCD_Module#Jetson_Nano_User_Guide Jetson Nano section in the 1.5in LCD Guide]
# Power down the Jetson and wire up the display using the following pinout:
{| class="wikitable"
|-
! Function !! Jetson Pin !! BCM Pin
|-
| 3v3 || 1 || 3v3
|-
| GND || 6 || GND
|-
| DIN/MOSI || 19 || D10 (Automatically selected by Jetson-GPIO)
|-
| SCK || 23 || D11 (Automatically selected)
|-
| CS || 24 || CE0 (Automatically selected)
|-
| DC || 22 || D25
|-
| RST || 31 || D6
|-
| BL || 32 || D12
|}
Clone the repository and launch run.py. You may need to install the following pip dependencies:
* Jetson.GPIO
* adafruit-circuitpython-busdevice
* spidev
* adafruit-blinka-displayio
* adafruit-circuitpython-rgb-display
* adafruit-circuitpython-st7789
* Pillow
The example code should draw some shapes every 2 seconds, followed by the Waveshare logo, then a Yuan dynasty poem, and lastly a series of images.
[[Category:Electronics]]
[[Category:Hardware]]
3883a329872d7bb587a41ff9ebe452eb6ae2fa8e
1840
1839
2024-09-01T02:19:32Z
Goblinrum
25
wikitext
text/x-wiki
Instructions to run the [https://www.waveshare.com/wiki/1.69inch_LCD_Module Waveshare 1.69in LCDs] on the Jetson carrier boards
[[File:Jetson BCM Mapping.png|thumb]]
Note the Linux BCM mappings in the image shown. Theoretically with the Jetson.GPIO library, you are able to use either the physical board pin mapping or the BCM mapping, but this process has only been vetted for the BCM mode. When referring to pins in the Python package, use the BCM mapping. For example, if you use pin 32, use "D12" when referring to the pin. With the Adafruit package, this is written as "board.D12"
The [https://www.waveshare.com/wiki/1.5inch_LCD_Module 1.5 Inch LCD Module] page actually tells you how to set up the Jetson for the Waveshare LCDs. However, since it uses a different controller, the actual firmware needs to be modified.
You can find the draft changes repo [https://github.com/goblinrum/waveshare-1.69in-lcd-jetson here].
Steps to reproduce:
# On Jetson, use the jetson-io to enable SPI1. Steps are shown in the [https://www.waveshare.com/wiki/1.5inch_LCD_Module#Jetson_Nano_User_Guide Jetson Nano section in the 1.5in LCD Guide]. Follow the "Enable SPI" and "Library Installation" sections. Stop before the "Download Example" section.
# Power down the Jetson and wire up the display using the following pinout:
{| class="wikitable"
|-
! Function !! Jetson Pin !! BCM Pin
|-
| 3v3 || 1 || 3v3
|-
| GND || 6 || GND
|-
| DIN/MOSI || 19 || D10 (Automatically selected by Jetson-GPIO)
|-
| SCK || 23 || D11 (Automatically selected)
|-
| CS || 24 || CE0 (Automatically selected)
|-
| DC || 22 || D25
|-
| RST || 31 || D6
|-
| BL || 32 || D12
|}
Clone the repository and launch run.py. You may need to install the following pip dependencies:
* Jetson.GPIO
* adafruit-circuitpython-busdevice
* spidev
* adafruit-blinka-displayio
* adafruit-circuitpython-rgb-display
* adafruit-circuitpython-st7789
* Pillow
The example code should draw some shapes every 2 seconds, followed by the Waveshare logo, then a Yuan dynasty poem, and lastly a series of images.
[[Category:Electronics]]
[[Category:Hardware]]
c79acdb372a5c077bd10dde5cb3753ac8734d6b0
1841
1840
2024-09-01T23:58:59Z
Goblinrum
25
wikitext
text/x-wiki
Instructions to run the [https://www.waveshare.com/wiki/1.69inch_LCD_Module Waveshare 1.69in LCDs] on the Jetson carrier boards
[[File:Jetson BCM Mapping.png|thumb]]
Note the Linux BCM mappings in the image shown. Theoretically with the Jetson.GPIO library, you are able to use either the physical board pin mapping or the BCM mapping, but this process has only been vetted for the BCM mode. When referring to pins in the Python package, use the BCM mapping. For example, if you use pin 32, use "D12" when referring to the pin. With the Adafruit package, this is written as "board.D12"
The [https://www.waveshare.com/wiki/1.5inch_LCD_Module 1.5 Inch LCD Module] page actually tells you how to set up the Jetson for the Waveshare LCDs. However, since it uses a different controller, the actual firmware needs to be modified.
You can find the draft changes repo [https://github.com/goblinrum/waveshare-1.69in-lcd-jetson here].
Steps to reproduce:
# On Jetson, use the jetson-io to enable SPI1. Steps are shown in the [https://www.waveshare.com/wiki/1.5inch_LCD_Module#Jetson_Nano_User_Guide Jetson Nano section in the 1.5in LCD Guide]. Follow the "Enable SPI" and "Library Installation" sections. Stop before the "Download Example" section.
# Power down the Jetson and wire up the display using the following pinout:
{| class="wikitable"
|-
! Function !! Jetson Pin !! BCM Pin
|-
| 3v3 || 1 || 3v3
|-
| GND || 6 || GND
|-
| DIN/MOSI || 19 || D10 (Automatically selected by Jetson-GPIO)
|-
| SCK || 23 || D11 (Automatically selected)
|-
| CS || 24 || CE0 (Automatically selected)
|-
| DC || 22 || D25
|-
| RST || 31 || D6
|-
| BL || 32 || D12
|}
Clone the repository and launch run.py. You may need to install the following pip dependencies:
* Jetson.GPIO
* adafruit-circuitpython-busdevice
* spidev
* adafruit-blinka-displayio
* adafruit-circuitpython-rgb-display
* adafruit-circuitpython-st7789
* Pillow
You may also need to run <code>sudo modprobe spidev</code> to get the spidev driver loaded; a minimal setup sketch is shown below.
The example code should draw some shapes every 2 seconds, followed by the Waveshare logo, then a Yuan dynasty poem, and lastly a series of images.
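A minimal setup sketch, assuming a stock pip and the repository's <code>run.py</code> as described above; the package names are taken from the list above and the exact versions are left to pip.
<syntaxhighlight lang="bash">
# Install the Python dependencies listed above.
pip install Jetson.GPIO adafruit-circuitpython-busdevice spidev \
    adafruit-blinka-displayio adafruit-circuitpython-rgb-display \
    adafruit-circuitpython-st7789 Pillow

# Load the spidev driver if it is not already loaded.
sudo modprobe spidev

# Run the example from the cloned repository.
python3 run.py
</syntaxhighlight>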
== Testing with 2 LCDs ==
Ideally, we run both screens off the same SPI line and use the built-in chip-select pins on the Jetson. However, based on current testing, the Jetson-IO CS functionality does not work. If a device is connected to CS1 and CS0, CS0 will always remain low. The code in the repository assumes you use both spi lines (SPI0 and SPI2, so spidev0.0 and spidev2.0 respectively).
Wire up the second display as such:
{| class="wikitable"
|-
! Function !! Jetson Pin !! BCM Pin
|-
| 3v3 || 17 || 3v3
|-
| GND || 9 || GND
|-
| DIN/MOSI || 37 || D26 (Automatically selected by Jetson-GPIO)
|-
| SCK || 13 || D27 (Automatically selected)
|-
| CS || 18 || D24 (Automatically selected)
|-
| DC || 36 || D16
|-
| RST || 38 || D20
|-
| BL || 33 || D13
|}
The code in the repository should already specify these pins for the second screen.
[[Category:Electronics]]
[[Category:Hardware]]
37a0d6cb66a85e591d325ab077ebee376e09205d
Jetson Orin Notes
0
218
1842
1824
2024-09-05T16:08:22Z
Budzianowski
19
/* Buildroot approach (avoids SDK Manager) */
wikitext
text/x-wiki
Notes on programming/interfacing with Jetson Orin hardware.
=== Upgrading AGX to Jetson Linux 36.3 ===
==== BSP approach (avoids SDK Manager) ====
* Requires Ubuntu 22.04. Very unhappy to work on Gentoo.
* Requires Intel/AMD 64bit CPU.
* Download "Driver Package (BSP)" from [https://developer.nvidia.com/embedded/jetson-linux here]
* Unpack (as root, get used to doing most of this as root), preserving privileges
** <code>tar xjpf ...</code>
* Download "Sample Root Filesystem"
* Unpack (as root..) into rootfs directory inside of the BSP archive above.
* Run <code>sudo ./tools/l4t_flash_prerequisites.sh</code>
* Run <code>./apply_binaries.sh</code> from the BSP
** Note: If apply_binaries (or frankly, anything, this is brittle) fails, remove and recreate rootfs - the OS might be left in an unbootable state.
* Reboot AGX into "Recovery Mode" - hold the recovery button and reset button, release simultaneously ((sic) reset first?)
* Connect USB-C cable to the debug port ("front" USB-c)
* The Nvidia AGX device should appear in the <code>lsusb</code> output under NVIDIA Corp. APX
* Run <code>./flash.sh</code>. There are different options for different use cases; see the [https://docs.nvidia.com/jetson/archives/r36.3/DeveloperGuide/IN/QuickStart.html#in-quickstart Quick Start guide].
Jetson AGX Orin Developer Kit (eMMC):
<code>$ sudo ./flash.sh jetson-agx-orin-devkit internal</code>
* Watch for a few minutes, typically it crashes early, then go for lunch.
=== Upgrading Nano to Jetson Linux 36.3 ===
==== Buildroot approach (avoids SDK Manager) ====
* <code>sudo mkfs.ext4 /dev/sdb</code>
* <code>sudo umount /dev/sdb</code>
* <code>lsblk -f</code>
* Transfer rootfs onto sd card
<code>sudo dd if=/home/dpsh/Downloads/rootfs.ext4 of=/dev/sdc1 bs=1M status=progress</code>
* Make sure the data was transferred to SD/NVMe
<code>sync</code>
[[Category: Firmware]]
e860a6679ea564200e493b82b88d39453461a277
1843
1842
2024-09-05T16:09:00Z
Budzianowski
19
wikitext
text/x-wiki
Notes on programming/interfacing with Jetson Orin hardware.
=== Upgrading AGX to Jetson Linux 36.3 ===
==== BSP approach (avoids SDK Manager) ====
* Requires Ubuntu 22.04. Very unhappy to work on Gentoo.
* Requires Intel/AMD 64bit CPU.
* Download "Driver Package (BSP)" from [https://developer.nvidia.com/embedded/jetson-linux here]
* Unpack (as root, get used to doing most of this as root), preserving privileges
** <code>tar xjpf ...</code>
* Download "Sample Root Filesystem"
* Unpack (as root..) into rootfs directory inside of the BSP archive above.
* Run <code>sudo ./tools/l4t_flash_prerequisites.sh</code>
* Run <code>./apply_binaries.sh</code> from the BSP
** Note: If apply_binaries (or frankly, anything, this is brittle) fails, remove and recreate rootfs - the OS might be left in an unbootable state.
* Reboot AGX into "Recovery Mode" - hold the recovery button and reset button, release simultaneously ((sic) reset first?)
* Connect USB-C cable to the debug port ("front" USB-c)
* The Nvidia AGX device should appear in the <code>lsusb</code> output under NVIDIA Corp. APX
* Run <code>./flash.sh</code>. There are different options for different use cases; see the [https://docs.nvidia.com/jetson/archives/r36.3/DeveloperGuide/IN/QuickStart.html#in-quickstart Quick Start guide].
Jetson AGX Orin Developer Kit (eMMC):
<code>$ sudo ./flash.sh jetson-agx-orin-devkit internal</code>
* Watch for a few minutes, typically it crashes early, then go for lunch. (A consolidated command sketch is shown below.)
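The BSP flow above, condensed into one hedged sketch. The archive filenames are placeholders for the BSP and sample root filesystem you downloaded, and everything here runs as root, so double-check paths before running it.
<syntaxhighlight lang="bash">
# Run from the directory holding the downloads; archive names are placeholders.
sudo tar xjpf Jetson_Linux_R36.3.0_aarch64.tbz2
sudo tar xpf Tegra_Linux_Sample-Root-Filesystem_R36.3.0_aarch64.tbz2 -C Linux_for_Tegra/rootfs/

cd Linux_for_Tegra
sudo ./tools/l4t_flash_prerequisites.sh
sudo ./apply_binaries.sh

# With the AGX in recovery mode and visible in lsusb:
sudo ./flash.sh jetson-agx-orin-devkit internal
</syntaxhighlight>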
=== Upgrading Nano to Jetson Linux 36.3 ===
==== Buildroot approach (avoids SDK Manager) ====
* <code>sudo mkfs.ext4 /dev/sdb </code>
* <code>sudo umount /dev/sdb </code>
* <code>lsblk -f</code>
* Transfer rootfs onto sd card
<code>sudo dd if=/home/dpsh/Downloads/rootfs.ext4 of=/dev/sdc1 bs=1M status=progress</code>
* Make sure the data was transferred to the SD card / NVMe drive (see the consolidated sketch below)
<code>sync</code>
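The same sequence as a single sketch, with unmounting moved before formatting (the usual order). The device nodes mirror the notes above (<code>/dev/sdb</code> vs <code>/dev/sdc1</code>) and will almost certainly differ on your machine, so verify them with <code>lsblk -f</code> first.
<syntaxhighlight lang="bash">
# Identify the SD card / NVMe device first; the device names below are examples
# taken from the notes above -- verify yours with lsblk before running anything.
lsblk -f

# Unmount and format the target (destructive!).
sudo umount /dev/sdb || true
sudo mkfs.ext4 /dev/sdb

# Write the Buildroot rootfs image onto the target partition.
sudo dd if="$HOME/Downloads/rootfs.ext4" of=/dev/sdc1 bs=1M status=progress

# Flush buffers so the data actually lands on the card before removal.
sync
</syntaxhighlight>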
[[Category: Firmware]]
ed5e5c1f2ecdb9ea931e6d41780bafa988339127
Dennis' Speech Project
0
293
1844
1331
2024-09-18T02:36:24Z
Ben
2
/* Papers */
wikitext
text/x-wiki
=== Papers ===
* [https://distill.pub/2017/ctc/ CTC]
* [https://arxiv.org/abs/1810.04805 BERT]
* [https://arxiv.org/abs/2006.11477 wav2vec 2.0]
* [https://arxiv.org/abs/2106.07447 HuBERT]
* [https://speechbot.github.io/ Textless NLP project]
* [https://arxiv.org/abs/2210.13438 Encodec]
* [https://arxiv.org/abs/2308.16692 SpeechTokenizer]
* [https://github.com/suno-ai/bark Suno Bark Model]
* [https://github.com/google-research/parti Parti]
f34bd46589e075ee29329d1e397f2bdb29fbd34b
IROS 24 Humanoid Papers
0
425
1847
2024-09-25T00:29:18Z
Vrtnis
21
/*Initial Commit*/
wikitext
text/x-wiki
{| class="wikitable"
! Session Code !! Session Title !! Session Type !! Paper Code !! Paper Title !! Authors !! Affiliations
|-
| ThBT12 || Learning from Humans || Regular session || ThBT12.3 || Learning Human-To-Humanoid Real-Time Whole-Body Teleoperation || He, Tairan; Luo, Zhengyi; Xiao, Wenli; Zhang, Chong; Kitani, Kris; Liu, Changliu; Shi, Guanya || Carnegie Mellon University; ETH Zurich
|-
| FrPI6T1 || Humanoid and Bipedal Locomotion || Teaser Session || FrPI6T1.3 || Whole-Body Humanoid Robot Locomotion with Human Reference || Zhang, Qiang; Cui, Peter; Yan, David; Sun, Jingkai; Duan, Yiqun; Han, Gang; Zhao, Wen; Zhang, Weining; Guo, Yijie; Zhang, Arthur; Xu, Renjing || The Hong Kong University of Science and Technology; Peter & David Robotics; University of Technology Sydney; PND Robotics; UBTECH Robotics
|-
| FrPI6T1 || Humanoid and Bipedal Locomotion || Teaser Session || FrPI6T1.13 || Fly by Book: How to Train a Humanoid Robot to Fly an Airplane Using Large Language Models || Kim, Hyungjoo; Min, Sungjae; Kang, Gyuree; Kim, Jihyeok; Shim, David Hyunchul || KAIST
|-
| FrPI6T1 || Humanoid and Bipedal Locomotion || Teaser Session || FrPI6T1.14 || Towards Designing a Low-Cost Humanoid Robot with Flex Sensors-Based Movement || Al Omoush, Muhammad H.; Kishore, Sameer; Mehigan, Tracey || Dublin City University; Middlesex University
|-
| MoWMT17 || AI Meets Autonomy: Vision, Language, and Autonomous Systems || Workshop || MoWMT17.1 || AI Meets Autonomy: Vision, Language, and Autonomous Systems || Wang, Wenshan; Zhang, Ji; Zhang, Haochen; Zhao, Shibo; Gupta, Abhinav; Ramanan, Deva; Zeng, Andy; Kim, Ayoung; Nieto-Granda, Carlos || Carnegie Mellon University; Google DeepMind; Seoul National University; DEVCOM U.S. Army Research Laboratory
|-
| MoWMT18 || Collecting, Managing and Utilizing Data through Embodied Robots || Workshop || MoWMT18.1 || Collecting, Managing and Utilizing Data through Embodied Robots || Saito, Namiko; Al-Sada, Mohammed; Shigemune, Hiroki; Tsumura, Ryosuke; Funabashi, Satoshi; Miyake, Tamon; Ogata, Tetsuya || The University of Edinburgh; Waseda University, Qatar University; Shibaura Institute of Technology; AIST; Waseda University
|-
| MoWPT2 || Long-Term Perception for Autonomy in Dynamic Human-Centric Environments: What Do Robots Need? || Workshop || MoWPT2.1 || Long-Term Perception for Autonomy in Dynamic Human-Centric Environments: What Do Robots Need? || Schmid, Lukas M.; Talak, Rajat; Zheng, Jianhao; Andersson, Olov; Oleynikova, Helen; Park, Jong Jin; Wald, Johanna; Siegwart, Roland; Tombari, Federico; Carlone, Luca || MIT; Stanford University; KTH Royal Institute; ETH Zurich; Amazon Lab126; Everyday Robots; Technische Universität München
|-
| TuWAT5 || Humanoid Hybrid Sprint || Tutorial || TuWAT5.1 || Humanoid Hybrid Sprint || Osokin, Ilya || Moscow Institute of Physics and Technology
|-
| TuBT11 || Legged and Humanoid Robots || Regular session || TuBT11.1 || Invariant Smoother for Legged Robot State Estimation with Dynamic Contact Event Information (I) || Yoon, Ziwon; Kim, Joon-Ha; Park, Hae-Won || Georgia Institute of Technology; KAIST
|-
| TuBT11 || Legged and Humanoid Robots || Regular session || TuBT11.2 || MorAL: Learning Morphologically Adaptive Locomotion Controller for Quadrupedal Robots on Challenging Terrains || Luo, Zeren; Dong, Yinzhao; Li, Xinqi; Huang, Rui; Shu, Zhengjie; Xiao, Erdong; Lu, Peng || The University of Hong Kong
|-
| WeAT1 || Embodied AI with Two Arms: Zero-Shot Learning, Safety and Modularity || Regular session || WeAT1.4 || Embodied AI with Two Arms: Zero-Shot Learning, Safety and Modularity || Varley, Jacob; Singh, Sumeet; Jain, Deepali; Choromanski, Krzysztof; Zeng, Andy; Basu Roy Chowdhury, Somnath; Dubey, Avinava; Sindhwani, Vikas || Google; Google DeepMind; UNC Chapel Hill; Google Brain, NYC
|-
| WeDT12 || Imitation Learning I || Regular session || WeDT12.1 || Uncertainty-Aware Haptic Shared Control with Humanoid Robots for Flexible Object Manipulation || Hara, Takumi; Sato, Takashi; Ogata, Tetsuya; Awano, Hiromitsu || Kyoto University; Waseda University
|-
| FrPI6T1 || Humanoid and Bipedal Locomotion || Teaser Session || FrPI6T1.16 || From CAD to URDF: Co-Design of a Jet-Powered Humanoid Robot Including CAD Geometry || Vanteddu, Punith Reddy; Nava, Gabriele; Bergonti, Fabio; L'Erario, Giuseppe; Paolino, Antonello; Pucci, Daniele || Istituto Italiano Di Tecnologia
|}
79687b1092bae56ead9433ab3d7a8fb333a927fc
1848
1847
2024-09-25T00:30:36Z
Vrtnis
21
Vrtnis moved page [[IROS 24 Humanoid]] to [[IROS 24 Humanoid Papers]]
wikitext
text/x-wiki
{| class="wikitable"
! Session Code !! Session Title !! Session Type !! Paper Code !! Paper Title !! Authors !! Affiliations
|-
| ThBT12 || Learning from Humans || Regular session || ThBT12.3 || Learning Human-To-Humanoid Real-Time Whole-Body Teleoperation || He, Tairan; Luo, Zhengyi; Xiao, Wenli; Zhang, Chong; Kitani, Kris; Liu, Changliu; Shi, Guanya || Carnegie Mellon University; ETH Zurich
|-
| FrPI6T1 || Humanoid and Bipedal Locomotion || Teaser Session || FrPI6T1.3 || Whole-Body Humanoid Robot Locomotion with Human Reference || Zhang, Qiang; Cui, Peter; Yan, David; Sun, Jingkai; Duan, Yiqun; Han, Gang; Zhao, Wen; Zhang, Weining; Guo, Yijie; Zhang, Arthur; Xu, Renjing || The Hong Kong University of Science and Technology; Peter & David Robotics; University of Technology Sydney; PND Robotics; UBTECH Robotics
|-
| FrPI6T1 || Humanoid and Bipedal Locomotion || Teaser Session || FrPI6T1.13 || Fly by Book: How to Train a Humanoid Robot to Fly an Airplane Using Large Language Models || Kim, Hyungjoo; Min, Sungjae; Kang, Gyuree; Kim, Jihyeok; Shim, David Hyunchul || KAIST
|-
| FrPI6T1 || Humanoid and Bipedal Locomotion || Teaser Session || FrPI6T1.14 || Towards Designing a Low-Cost Humanoid Robot with Flex Sensors-Based Movement || Al Omoush, Muhammad H.; Kishore, Sameer; Mehigan, Tracey || Dublin City University; Middlesex University
|-
| MoWMT17 || AI Meets Autonomy: Vision, Language, and Autonomous Systems || Workshop || MoWMT17.1 || AI Meets Autonomy: Vision, Language, and Autonomous Systems || Wang, Wenshan; Zhang, Ji; Zhang, Haochen; Zhao, Shibo; Gupta, Abhinav; Ramanan, Deva; Zeng, Andy; Kim, Ayoung; Nieto-Granda, Carlos || Carnegie Mellon University; Google DeepMind; Seoul National University; DEVCOM U.S. Army Research Laboratory
|-
| MoWMT18 || Collecting, Managing and Utilizing Data through Embodied Robots || Workshop || MoWMT18.1 || Collecting, Managing and Utilizing Data through Embodied Robots || Saito, Namiko; Al-Sada, Mohammed; Shigemune, Hiroki; Tsumura, Ryosuke; Funabashi, Satoshi; Miyake, Tamon; Ogata, Tetsuya || The University of Edinburgh; Waseda University, Qatar University; Shibaura Institute of Technology; AIST; Waseda University
|-
| MoWPT2 || Long-Term Perception for Autonomy in Dynamic Human-Centric Environments: What Do Robots Need? || Workshop || MoWPT2.1 || Long-Term Perception for Autonomy in Dynamic Human-Centric Environments: What Do Robots Need? || Schmid, Lukas M.; Talak, Rajat; Zheng, Jianhao; Andersson, Olov; Oleynikova, Helen; Park, Jong Jin; Wald, Johanna; Siegwart, Roland; Tombari, Federico; Carlone, Luca || MIT; Stanford University; KTH Royal Institute; ETH Zurich; Amazon Lab126; Everyday Robots; Technische Universität München
|-
| TuWAT5 || Humanoid Hybrid Sprint || Tutorial || TuWAT5.1 || Humanoid Hybrid Sprint || Osokin, Ilya || Moscow Institute of Physics and Technology
|-
| TuBT11 || Legged and Humanoid Robots || Regular session || TuBT11.1 || Invariant Smoother for Legged Robot State Estimation with Dynamic Contact Event Information (I) || Yoon, Ziwon; Kim, Joon-Ha; Park, Hae-Won || Georgia Institute of Technology; KAIST
|-
| TuBT11 || Legged and Humanoid Robots || Regular session || TuBT11.2 || MorAL: Learning Morphologically Adaptive Locomotion Controller for Quadrupedal Robots on Challenging Terrains || Luo, Zeren; Dong, Yinzhao; Li, Xinqi; Huang, Rui; Shu, Zhengjie; Xiao, Erdong; Lu, Peng || The University of Hong Kong
|-
| WeAT1 || Embodied AI with Two Arms: Zero-Shot Learning, Safety and Modularity || Regular session || WeAT1.4 || Embodied AI with Two Arms: Zero-Shot Learning, Safety and Modularity || Varley, Jacob; Singh, Sumeet; Jain, Deepali; Choromanski, Krzysztof; Zeng, Andy; Basu Roy Chowdhury, Somnath; Dubey, Avinava; Sindhwani, Vikas || Google; Google DeepMind; UNC Chapel Hill; Google Brain, NYC
|-
| WeDT12 || Imitation Learning I || Regular session || WeDT12.1 || Uncertainty-Aware Haptic Shared Control with Humanoid Robots for Flexible Object Manipulation || Hara, Takumi; Sato, Takashi; Ogata, Tetsuya; Awano, Hiromitsu || Kyoto University; Waseda University
|-
| FrPI6T1 || Humanoid and Bipedal Locomotion || Teaser Session || FrPI6T1.16 || From CAD to URDF: Co-Design of a Jet-Powered Humanoid Robot Including CAD Geometry || Vanteddu, Punith Reddy; Nava, Gabriele; Bergonti, Fabio; L'Erario, Giuseppe; Paolino, Antonello; Pucci, Daniele || Istituto Italiano Di Tecnologia
|}
79687b1092bae56ead9433ab3d7a8fb333a927fc
IROS 24 Humanoid
0
426
1849
2024-09-25T00:30:36Z
Vrtnis
21
Vrtnis moved page [[IROS 24 Humanoid]] to [[IROS 24 Humanoid Papers]]
wikitext
text/x-wiki
#REDIRECT [[IROS 24 Humanoid Papers]]
fcc971bed200572db45aba48064485d1a045ca43
Main Page
0
1
1850
1825
2024-09-25T03:01:07Z
Ben
2
add moteus n1
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.youtube.com/watch?v=LiNgr1tz49I&list=PLZnJoM76RM6ItAfZIxJYNKdaR_BobleLY Advanced Robot Dynamics]
| High-quality open-source course from CMU
|-
| [https://www.youtube.com/watch?v=6rUdAOCNXAU&list=PLZnJoM76RM6KugDT9sw5zhAmqKnGeoLRa Optimal Control]
| High-quality open-source course from CMU
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
| Textbook for graduate-level courses in robotics
|-
| [[Learning algorithms]]
| Resources related with training humanoid models in simulation and real environments
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Courses]]
| Category for pages about useful courses related to robotics
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Firmware]]
| Category for pages relating to firmware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Papers]]
| Category for humanoid robotics papers
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== Actuators ===
{| class="wikitable"
|-
! Actuator
|-
| [[SPIN Servo]]
|-
| [[DEEP Robotics J60]]
|-
| [[Robstride]]
|-
| [[MyActuator]]
|-
| [[Encos]]
|-
| [[Steadywin]]
|-
| [[Elemental Motors]]
|}
=== Motor Controllers ===
{| class="wikitable"
|-
! Controller
|-
| [[VESCular6]]
|-
| [[ODrive]]
|-
| [[OBot]]
|-
| [[Solo Motor Controller]]
|-
| [[moteus-n1]]
|-
| [[K-Scale Motor Controller]]
|}
=== Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]], [[Alex]]
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[MagicLab,_DREAME]]
| [[MagicBot]]
|-
| [[Engineered Arts]]
| [[Ameca]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure AI]]
| [[Figure 01]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[GALBOT]]
| [[GALBOT]]
|-
| [[Haier]]
| [[Kuavo (Kuafu)]]
|-
| [[Honda Robotics]]
| [[ASIMO]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Instituto Italiano]]
| [[iCub]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[LASER Robotics]]
| [[HECTOR V2]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[NASA]]
| [[Valkyrie]], [[Robonaut2]]
|-
| [[NEURA Robotics]]
| [[4NE-1]]
|-
| [[Noetix]]
| [[Dora]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[PaXini]]
| [[Tora]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[Pollen Robotics]]
| [[Reachy]]
|-
| [[Proxy]]
|
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[SoftBank Robotics]]
| [[Pepper]], [[NAO]]
|-
| [[Stanford Robotics Lab]]
| [[OceanOneK]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[SUPCON]]
| [[Navigator α]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[THK]]
|
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[UC Berkeley]]
| [[Berkeley Blue]]
|-
| [[Unitree]]
| [[H1]], [[G1]]
|-
| [[University of Tehran]]
| [[Surena IV]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[WorkFar]]
| [[WorkFar Syntro]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Xpeng]]
| [[PX5]]
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== Discord community ===
[https://discord.gg/kscale Discord]
2dabce6cd0bcc0c155449d9df615944b68aa0a92
1860
1850
2024-10-15T00:44:11Z
Ben
2
/* Actuators */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.youtube.com/watch?v=LiNgr1tz49I&list=PLZnJoM76RM6ItAfZIxJYNKdaR_BobleLY Advanced Robot Dynamics]
| High-quality open-source course from CMU
|-
| [https://www.youtube.com/watch?v=6rUdAOCNXAU&list=PLZnJoM76RM6KugDT9sw5zhAmqKnGeoLRa Optimal Control]
| High-quality open-source course from CMU
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
| Textbook for graduate-level courses in robotics
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulated and real environments
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Courses]]
| Category for pages about useful courses related to robotics
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Firmware]]
| Category for pages relating to firmware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Papers]]
| Category for humanoid robotics papers
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== Actuators ===
{| class="wikitable"
|-
! Actuator
|-
| [[SPIN Servo]]
|-
| [[DEEP Robotics J60]]
|-
| [[Robstride]]
|-
| [[MyActuator]]
|-
| [[Encos]]
|-
| [[Steadywin]]
|-
| [[Elemental Motors]]
|-
| [[Anydrive]]
|-
| [[HEBI]]
|-
| [[ZeroErr]]
|-
| [[CubeMars]]
|-
| [[MjBots]]
|-
| [[Dynamixel]]
|-
| [[LinEngineering]]
|-
| [[Faradyi]]
|}
=== Motor Controllers ===
{| class="wikitable"
|-
! Controller
|-
| [[VESCular6]]
|-
| [[ODrive]]
|-
| [[OBot]]
|-
| [[Solo Motor Controller]]
|-
| [[moteus-n1]]
|-
| [[K-Scale Motor Controller]]
|}
=== Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]], [[Alex]]
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[MagicLab,_DREAME]]
| [[MagicBot]]
|-
| [[Engineered Arts]]
| [[Ameca]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure AI]]
| [[Figure 01]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[GALBOT]]
| [[GALBOT]]
|-
| [[Haier]]
| [[Kuavo (Kuafu)]]
|-
| [[Honda Robotics]]
| [[ASIMO]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Instituto Italiano]]
| [[iCub]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[LASER Robotics]]
| [[HECTOR V2]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[NASA]]
| [[Valkyrie]], [[Robonaut2]]
|-
| [[NEURA Robotics]]
| [[4NE-1]]
|-
| [[Noetix]]
| [[Dora]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[PaXini]]
| [[Tora]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[Pollen Robotics]]
| [[Reachy]]
|-
| [[Proxy]]
|
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[SoftBank Robotics]]
| [[Pepper]], [[NAO]]
|-
| [[Stanford Robotics Lab]]
| [[OceanOneK]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[SUPCON]]
| [[Navigator α]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[THK]]
|
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[UC Berkeley]]
| [[Berkeley Blue]]
|-
| [[Unitree]]
| [[H1]], [[G1]]
|-
| [[University of Tehran]]
| [[Surena IV]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[WorkFar]]
| [[WorkFar Syntro]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Xpeng]]
| [[PX5]]
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== Discord community ===
[https://discord.gg/kscale Discord]
a8dc4f1b6b1d4d006713d09bb23de4059b9a1485
1875
1860
2024-10-15T20:10:16Z
Ben
2
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.youtube.com/watch?v=LiNgr1tz49I&list=PLZnJoM76RM6ItAfZIxJYNKdaR_BobleLY Advanced Robot Dynamics]
| High-quality open-source course from CMU
|-
| [https://www.youtube.com/watch?v=6rUdAOCNXAU&list=PLZnJoM76RM6KugDT9sw5zhAmqKnGeoLRa Optimal Control]
| High-quality open-source course from CMU
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
| Textbook for graduate-level courses in robotics
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulated and real environments
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Courses]]
| Category for pages about useful courses related to robotics
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Firmware]]
| Category for pages relating to firmware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Papers]]
| Category for humanoid robotics papers
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== Actuators ===
{| class="wikitable"
|-
! Actuator
|-
| [[SPIN Servo]]
|-
| [[DEEP Robotics J60]]
|-
| [[Robstride]]
|-
| [[MyActuator]]
|-
| [[Encos]]
|-
| [[Steadywin]]
|-
| [[Elemental Motors]]
|-
| [[Anydrive]]
|-
| [[HEBI]]
|-
| [[ZeroErr]]
|-
| [[CubeMars]]
|-
| [[MjBots]]
|-
| [[Dynamixel]]
|-
| [[LinEngineering]]
|-
| [[Faradyi]]
|}
=== Inference Boards ===
{| class="wikitable"
|-
! [[Jetson]]
|-
! [[SiMi AI]]
|}
=== Motor Controllers ===
{| class="wikitable"
|-
! Controller
|-
| [[VESCular6]]
|-
| [[ODrive]]
|-
| [[OBot]]
|-
| [[Solo Motor Controller]]
|-
| [[moteus-n1]]
|-
| [[K-Scale Motor Controller]]
|}
=== Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]], [[Alex]]
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[MagicLab,_DREAME]]
| [[MagicBot]]
|-
| [[Engineered Arts]]
| [[Ameca]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure AI]]
| [[Figure 01]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[GALBOT]]
| [[GALBOT]]
|-
| [[Haier]]
| [[Kuavo (Kuafu)]]
|-
| [[Honda Robotics]]
| [[ASIMO]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Instituto Italiano]]
| [[iCub]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[LASER Robotics]]
| [[HECTOR V2]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[NASA]]
| [[Valkyrie]], [[Robonaut2]]
|-
| [[NEURA Robotics]]
| [[4NE-1]]
|-
| [[Noetix]]
| [[Dora]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[PaXini]]
| [[Tora]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[Pollen Robotics]]
| [[Reachy]]
|-
| [[Proxy]]
|
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[SoftBank Robotics]]
| [[Pepper]], [[NAO]]
|-
| [[Stanford Robotics Lab]]
| [[OceanOneK]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[SUPCON]]
| [[Navigator α]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[THK]]
|
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[UC Berkeley]]
| [[Berkeley Blue]]
|-
| [[Unitree]]
| [[H1]], [[G1]]
|-
| [[University of Tehran]]
| [[Surena IV]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[WorkFar]]
| [[WorkFar Syntro]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Xpeng]]
| [[PX5]]
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== Discord community ===
[https://discord.gg/kscale Discord]
5a7b9ab145828ab77f2e8b72ef7ae560d8a96943
1878
1875
2024-10-15T20:11:50Z
Ben
2
/* Inference Boards */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.youtube.com/watch?v=LiNgr1tz49I&list=PLZnJoM76RM6ItAfZIxJYNKdaR_BobleLY Advanced Robot Dynamics]
| High-quality open-source course from CMU
|-
| [https://www.youtube.com/watch?v=6rUdAOCNXAU&list=PLZnJoM76RM6KugDT9sw5zhAmqKnGeoLRa Optimal Control]
| High-quality open-source course from CMU
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
| Textbook for graduate-level courses in robotics
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulated and real environments
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Courses]]
| Category for pages about useful courses related to robotics
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Firmware]]
| Category for pages relating to firmware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Papers]]
| Category for humanoid robotics papers
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== Actuators ===
{| class="wikitable"
|-
! Actuator
|-
| [[SPIN Servo]]
|-
| [[DEEP Robotics J60]]
|-
| [[Robstride]]
|-
| [[MyActuator]]
|-
| [[Encos]]
|-
| [[Steadywin]]
|-
| [[Elemental Motors]]
|-
| [[Anydrive]]
|-
| [[HEBI]]
|-
| [[ZeroErr]]
|-
| [[CubeMars]]
|-
| [[MjBots]]
|-
| [[Dynamixel]]
|-
| [[LinEngineering]]
|-
| [[Faradyi]]
|}
=== Inference Boards ===
{| class="wikitable"
! Inference Boards
|-
! [[Jetson]]
|-
! [[SiMi AI]]
|}
=== Motor Controllers ===
{| class="wikitable"
|-
! Controller
|-
| [[VESCular6]]
|-
| [[ODrive]]
|-
| [[OBot]]
|-
| [[Solo Motor Controller]]
|-
| [[moteus-n1]]
|-
| [[K-Scale Motor Controller]]
|}
=== Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]], [[Alex]]
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[MagicLab,_DREAME]]
| [[MagicBot]]
|-
| [[Engineered Arts]]
| [[Ameca]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure AI]]
| [[Figure 01]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[GALBOT]]
| [[GALBOT]]
|-
| [[Haier]]
| [[Kuavo (Kuafu)]]
|-
| [[Honda Robotics]]
| [[ASIMO]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Instituto Italiano]]
| [[iCub]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[LASER Robotics]]
| [[HECTOR V2]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[NASA]]
| [[Valkyrie]], [[Robonaut2]]
|-
| [[NEURA Robotics]]
| [[4NE-1]]
|-
| [[Noetix]]
| [[Dora]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[PaXini]]
| [[Tora]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[Pollen Robotics]]
| [[Reachy]]
|-
| [[Proxy]]
|
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[SoftBank Robotics]]
| [[Pepper]], [[NAO]]
|-
| [[Stanford Robotics Lab]]
| [[OceanOneK]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[SUPCON]]
| [[Navigator α]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[THK]]
|
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[UC Berkeley]]
| [[Berkeley Blue]]
|-
| [[Unitree]]
| [[H1]], [[G1]]
|-
| [[University of Tehran]]
| [[Surena IV]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[WorkFar]]
| [[WorkFar Syntro]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Xpeng]]
| [[PX5]]
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== Discord community ===
[https://discord.gg/kscale Discord]
465ac1454637d1479f82240e48386ceea17fecc0
1879
1878
2024-10-15T20:12:02Z
Ben
2
/* Inference Boards */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.youtube.com/watch?v=LiNgr1tz49I&list=PLZnJoM76RM6ItAfZIxJYNKdaR_BobleLY Advanced Robot Dynamics]
| High-quality open-source course from CMU
|-
| [https://www.youtube.com/watch?v=6rUdAOCNXAU&list=PLZnJoM76RM6KugDT9sw5zhAmqKnGeoLRa Optimal Control]
| High-quality open-source course from CMU
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
| Textbook for graduate-level courses in robotics
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulated and real environments
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Courses]]
| Category for pages about useful courses related to robotics
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Firmware]]
| Category for pages relating to firmware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Papers]]
| Category for humanoid robotics papers
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== Actuators ===
{| class="wikitable"
|-
! Actuator
|-
| [[SPIN Servo]]
|-
| [[DEEP Robotics J60]]
|-
| [[Robstride]]
|-
| [[MyActuator]]
|-
| [[Encos]]
|-
| [[Steadywin]]
|-
| [[Elemental Motors]]
|-
| [[Anydrive]]
|-
| [[HEBI]]
|-
| [[ZeroErr]]
|-
| [[CubeMars]]
|-
| [[MjBots]]
|-
| [[Dynamixel]]
|-
| [[LinEngineering]]
|-
| [[Faradyi]]
|}
=== Inference Boards ===
{| class="wikitable"
! Inference Boards
|-
| [[Jetson]]
|-
| [[SiMi AI]]
|}
=== Motor Controllers ===
{| class="wikitable"
|-
! Controller
|-
| [[VESCular6]]
|-
| [[ODrive]]
|-
| [[OBot]]
|-
| [[Solo Motor Controller]]
|-
| [[moteus-n1]]
|-
| [[K-Scale Motor Controller]]
|}
=== Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]], [[Alex]]
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[MagicLab,_DREAME]]
| [[MagicBot]]
|-
| [[Engineered Arts]]
| [[Ameca]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure AI]]
| [[Figure 01]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[GALBOT]]
| [[GALBOT]]
|-
| [[Haier]]
| [[Kuavo (Kuafu)]]
|-
| [[Honda Robotics]]
| [[ASIMO]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Instituto Italiano]]
| [[iCub]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[LASER Robotics]]
| [[HECTOR V2]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[NASA]]
| [[Valkyrie]], [[Robonaut2]]
|-
| [[NEURA Robotics]]
| [[4NE-1]]
|-
| [[Noetix]]
| [[Dora]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[PaXini]]
| [[Tora]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[Pollen Robotics]]
| [[Reachy]]
|-
| [[Proxy]]
|
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[SoftBank Robotics]]
| [[Pepper]], [[NAO]]
|-
| [[Stanford Robotics Lab]]
| [[OceanOneK]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[SUPCON]]
| [[Navigator α]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[THK]]
|
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[UC Berkeley]]
| [[Berkeley Blue]]
|-
| [[Unitree]]
| [[H1]], [[G1]]
|-
| [[University of Tehran]]
| [[Surena IV]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[WorkFar]]
| [[WorkFar Syntro]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Xpeng]]
| [[PX5]]
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== Discord community ===
[https://discord.gg/kscale Discord]
d7232270c2f4dc8d43764e0a27f63916b48efda3
1880
1879
2024-10-15T20:12:42Z
Ben
2
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.youtube.com/watch?v=LiNgr1tz49I&list=PLZnJoM76RM6ItAfZIxJYNKdaR_BobleLY Advanced Robot Dynamics]
| High-quality open-source course from CMU
|-
| [https://www.youtube.com/watch?v=6rUdAOCNXAU&list=PLZnJoM76RM6KugDT9sw5zhAmqKnGeoLRa Optimal Control]
| High-quality open-source course from CMU
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
| Textbook for graduate-level courses in robotics
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulated and real environments
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Courses]]
| Category for pages about useful courses related to robotics
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Firmware]]
| Category for pages relating to firmware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Papers]]
| Category for humanoid robotics papers
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== Components ===
{| class="wikitable"
|-
! Actuator
|-
| [[SPIN Servo]]
|-
| [[DEEP Robotics J60]]
|-
| [[Robstride]]
|-
| [[MyActuator]]
|-
| [[Encos]]
|-
| [[Steadywin]]
|-
| [[Elemental Motors]]
|-
| [[Anydrive]]
|-
| [[HEBI]]
|-
| [[ZeroErr]]
|-
| [[CubeMars]]
|-
| [[MjBots]]
|-
| [[Dynamixel]]
|-
| [[LinEngineering]]
|-
| [[Faradyi]]
|}
{| class="wikitable"
! Inference Boards
|-
| [[Jetson]]
|-
| [[SiMi AI]]
|}
{| class="wikitable"
|-
! Motor Controllers
|-
| [[VESCular6]]
|-
| [[ODrive]]
|-
| [[OBot]]
|-
| [[Solo Motor Controller]]
|-
| [[moteus-n1]]
|-
| [[K-Scale Motor Controller]]
|}
=== Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]], [[Alex]]
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[MagicLab,_DREAME]]
| [[MagicBot]]
|-
| [[Engineered Arts]]
| [[Ameca]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure AI]]
| [[Figure 01]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[GALBOT]]
| [[GALBOT]]
|-
| [[Haier]]
| [[Kuavo (Kuafu)]]
|-
| [[Honda Robotics]]
| [[ASIMO]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Instituto Italiano]]
| [[iCub]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[LASER Robotics]]
| [[HECTOR V2]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[NASA]]
| [[Valkyrie]], [[Robonaut2]]
|-
| [[NEURA Robotics]]
| [[4NE-1]]
|-
| [[Noetix]]
| [[Dora]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[PaXini]]
| [[Tora]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[Pollen Robotics]]
| [[Reachy]]
|-
| [[Proxy]]
|
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[SoftBank Robotics]]
| [[Pepper]], [[NAO]]
|-
| [[Stanford Robotics Lab]]
| [[OceanOneK]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[SUPCON]]
| [[Navigator α]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[THK]]
|
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[UC Berkeley]]
| [[Berkeley Blue]]
|-
| [[Unitree]]
| [[H1]], [[G1]]
|-
| [[University of Tehran]]
| [[Surena IV]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[WorkFar]]
| [[WorkFar Syntro]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Xpeng]]
| [[PX5]]
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== Discord community ===
[https://discord.gg/kscale Discord]
0818baec87ebf7beb8739466be9a07c361eb2d47
1881
1880
2024-10-15T20:13:10Z
Ben
2
/* Components */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.youtube.com/watch?v=LiNgr1tz49I&list=PLZnJoM76RM6ItAfZIxJYNKdaR_BobleLY Advanced Robot Dynamics]
| High-quality open-source course from CMU
|-
| [https://www.youtube.com/watch?v=6rUdAOCNXAU&list=PLZnJoM76RM6KugDT9sw5zhAmqKnGeoLRa Optimal Control]
| High-quality open-source course from CMU
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
| Textbook for graduate-level courses in robotics
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulated and real environments
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Courses]]
| Category for pages about useful courses related to robotics
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Firmware]]
| Category for pages relating to firmware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Papers]]
| Category for humanoid robotics papers
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== Components ===
{| class="wikitable"
|-
! Actuator
|-
| [[SPIN Servo]]
|-
| [[DEEP Robotics J60]]
|-
| [[Robstride]]
|-
| [[MyActuator]]
|-
| [[Encos]]
|-
| [[Steadywin]]
|-
| [[Elemental Motors]]
|-
| [[Anydrive]]
|-
| [[HEBI]]
|-
| [[ZeroErr]]
|-
| [[CubeMars]]
|-
| [[MjBots]]
|-
| [[Dynamixel]]
|-
| [[LinEngineering]]
|-
| [[Faradyi]]
|}
{| class="wikitable"
! Inference Boards
|-
| [[Jetson]]
|-
| [[SiMi AI]]
|}
{| class="wikitable"
|-
! Motor Controllers
|-
| [[VESCular6]]
|-
| [[ODrive]]
|-
| [[OBot]]
|-
| [[Solo Motor Controller]]
|-
| [[moteus-n1]]
|-
| [[K-Scale Motor Controller]]
|}
{| class="wikitable"
|-
! Communication Protocols
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]], [[Alex]]
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[MagicLab,_DREAME]]
| [[MagicBot]]
|-
| [[Engineered Arts]]
| [[Ameca]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure AI]]
| [[Figure 01]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[GALBOT]]
| [[GALBOT]]
|-
| [[Haier]]
| [[Kuavo (Kuafu)]]
|-
| [[Honda Robotics]]
| [[ASIMO]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Instituto Italiano]]
| [[iCub]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[LASER Robotics]]
| [[HECTOR V2]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[NASA]]
| [[Valkyrie]], [[Robonaut2]]
|-
| [[NEURA Robotics]]
| [[4NE-1]]
|-
| [[Noetix]]
| [[Dora]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[PaXini]]
| [[Tora]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[Pollen Robotics]]
| [[Reachy]]
|-
| [[Proxy]]
|
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[SoftBank Robotics]]
| [[Pepper]], [[NAO]]
|-
| [[Stanford Robotics Lab]]
| [[OceanOneK]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[SUPCON]]
| [[Navigator α]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[THK]]
|
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[UC Berkeley]]
| [[Berkeley Blue]]
|-
| [[Unitree]]
| [[H1]], [[G1]]
|-
| [[University of Tehran]]
| [[Surena IV]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[WorkFar]]
| [[WorkFar Syntro]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Xpeng]]
| [[PX5]]
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== Discord community ===
[https://discord.gg/kscale Discord]
14ba2d4d342c28fa83de1ef3bb6b5fcbb8f3bf6d
1882
1881
2024-10-15T20:13:49Z
Ben
2
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
Feel free to join our [https://discord.gg/kscale Discord community]!
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.youtube.com/watch?v=LiNgr1tz49I&list=PLZnJoM76RM6ItAfZIxJYNKdaR_BobleLY Advanced Robot Dynamics]
| High-quality open-source course from CMU
|-
| [https://www.youtube.com/watch?v=6rUdAOCNXAU&list=PLZnJoM76RM6KugDT9sw5zhAmqKnGeoLRa Optimal Control]
| High-quality open-source course from CMU
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
| Textbook for graduate-level courses in robotics
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulated and real environments
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Courses]]
| Category for pages about useful courses related to robotics
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Firmware]]
| Category for pages relating to firmware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Papers]]
| Category for humanoid robotics papers
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== Components ===
{| class="wikitable"
|-
! Actuator
|-
| [[SPIN Servo]]
|-
| [[DEEP Robotics J60]]
|-
| [[Robstride]]
|-
| [[MyActuator]]
|-
| [[Encos]]
|-
| [[Steadywin]]
|-
| [[Elemental Motors]]
|-
| [[Anydrive]]
|-
| [[HEBI]]
|-
| [[ZeroErr]]
|-
| [[CubeMars]]
|-
| [[MjBots]]
|-
| [[Dynamixel]]
|-
| [[LinEngineering]]
|-
| [[Faradyi]]
|}
{| class="wikitable"
! Inference Boards
|-
| [[Jetson]]
|-
| [[SiMi AI]]
|}
{| class="wikitable"
|-
! Motor Controllers
|-
| [[VESCular6]]
|-
| [[ODrive]]
|-
| [[OBot]]
|-
| [[Solo Motor Controller]]
|-
| [[moteus-n1]]
|-
| [[K-Scale Motor Controller]]
|}
{| class="wikitable"
|-
! Communication Protocols
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]], [[Alex]]
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[MagicLab,_DREAME]]
| [[MagicBot]]
|-
| [[Engineered Arts]]
| [[Ameca]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure AI]]
| [[Figure 01]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[GALBOT]]
| [[GALBOT]]
|-
| [[Haier]]
| [[Kuavo (Kuafu)]]
|-
| [[Honda Robotics]]
| [[ASIMO]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Instituto Italiano]]
| [[iCub]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[LASER Robotics]]
| [[HECTOR V2]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[NASA]]
| [[Valkyrie]], [[Robonaut2]]
|-
| [[NEURA Robotics]]
| [[4NE-1]]
|-
| [[Noetix]]
| [[Dora]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[PaXini]]
| [[Tora]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[Pollen Robotics]]
| [[Reachy]]
|-
| [[Proxy]]
|
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[SoftBank Robotics]]
| [[Pepper]], [[NAO]]
|-
| [[Stanford Robotics Lab]]
| [[OceanOneK]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[SUPCON]]
| [[Navigator α]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[THK]]
|
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[UC Berkeley]]
| [[Berkeley Blue]]
|-
| [[Unitree]]
| [[H1]], [[G1]]
|-
| [[University of Tehran]]
| [[Surena IV]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[WorkFar]]
| [[WorkFar Syntro]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Xpeng]]
| [[PX5]]
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== Discord community ===
[https://discord.gg/kscale Discord]
5a7e5ca3afdbb2309d27b7824752d6d36a9126da
1883
1882
2024-10-15T20:14:08Z
Ben
2
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
Feel free to join our [https://discord.gg/kscale Discord community] for more real-time discussion!
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
{| class="wikitable"
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.youtube.com/watch?v=LiNgr1tz49I&list=PLZnJoM76RM6ItAfZIxJYNKdaR_BobleLY Advanced Robot Dynamics]
| High-quality open-source course from CMU
|-
| [https://www.youtube.com/watch?v=6rUdAOCNXAU&list=PLZnJoM76RM6KugDT9sw5zhAmqKnGeoLRa Optimal Control]
| High-quality open-source course from CMU
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
| Textbook for graduate-level courses in robotics
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulated and real environments
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Courses]]
| Category for pages about useful courses related to robotics
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Firmware]]
| Category for pages relating to firmware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Papers]]
| Category for humanoid robotics papers
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== Components ===
{| class="wikitable"
|-
! Actuator
|-
| [[SPIN Servo]]
|-
| [[DEEP Robotics J60]]
|-
| [[Robstride]]
|-
| [[MyActuator]]
|-
| [[Encos]]
|-
| [[Steadywin]]
|-
| [[Elemental Motors]]
|-
| [[Anydrive]]
|-
| [[HEBI]]
|-
| [[ZeroErr]]
|-
| [[CubeMars]]
|-
| [[MjBots]]
|-
| [[Dynamixel]]
|-
| [[LinEngineering]]
|-
| [[Faradyi]]
|}
{| class="wikitable"
! Inference Boards
|-
| [[Jetson]]
|-
| [[SiMi AI]]
|}
{| class="wikitable"
|-
! Motor Controllers
|-
| [[VESCular6]]
|-
| [[ODrive]]
|-
| [[OBot]]
|-
| [[Solo Motor Controller]]
|-
| [[moteus-n1]]
|-
| [[K-Scale Motor Controller]]
|}
{| class="wikitable"
|-
! Communication Protocols
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]], [[Alex]]
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[MagicLab,_DREAME]]
| [[MagicBot]]
|-
| [[Engineered Arts]]
| [[Ameca]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure AI]]
| [[Figure 01]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[GALBOT]]
| [[GALBOT]]
|-
| [[Haier]]
| [[Kuavo (Kuafu)]]
|-
| [[Honda Robotics]]
| [[ASIMO]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Instituto Italiano]]
| [[iCub]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[LASER Robotics]]
| [[HECTOR V2]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[NASA]]
| [[Valkyrie]], [[Robonaut2]]
|-
| [[NEURA Robotics]]
| [[4NE-1]]
|-
| [[Noetix]]
| [[Dora]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[PaXini]]
| [[Tora]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[Pollen Robotics]]
| [[Reachy]]
|-
| [[Proxy]]
|
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[SoftBank Robotics]]
| [[Pepper]], [[NAO]]
|-
| [[Stanford Robotics Lab]]
| [[OceanOneK]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[SUPCON]]
| [[Navigator α]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[THK]]
|
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[UC Berkeley]]
| [[Berkeley Blue]]
|-
| [[Unitree]]
| [[H1]], [[G1]]
|-
| [[University of Tehran]]
| [[Surena IV]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[WorkFar]]
| [[WorkFar Syntro]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Xpeng]]
| [[PX5]]
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
2b5f4e9d54e910831c5f7167cf38ab9a475912fe
1884
1883
2024-10-15T20:17:43Z
Ben
2
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
Feel free to join our [https://discord.gg/kscale Discord community] for more real-time discussion!
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
{| class="wikitable mw-collapsible mw-collapsed"
|+ Resources
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.youtube.com/watch?v=LiNgr1tz49I&list=PLZnJoM76RM6ItAfZIxJYNKdaR_BobleLY Advanced Robot Dynamics]
| High-quality open-source course from CMU
|-
| [https://www.youtube.com/watch?v=6rUdAOCNXAU&list=PLZnJoM76RM6KugDT9sw5zhAmqKnGeoLRa Optimal Control]
| High-quality open-source course from CMU
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
| Textbook for graduate-level courses in robotics
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulated and real environments
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Courses]]
| Category for pages about useful courses related to robotics
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Firmware]]
| Category for pages relating to firmware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Papers]]
| Category for humanoid robotics papers
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== Components ===
{| class="wikitable"
|-
! Actuator
|-
| [[SPIN Servo]]
|-
| [[DEEP Robotics J60]]
|-
| [[Robstride]]
|-
| [[MyActuator]]
|-
| [[Encos]]
|-
| [[Steadywin]]
|-
| [[Elemental Motors]]
|-
| [[Anydrive]]
|-
| [[HEBI]]
|-
| [[ZeroErr]]
|-
| [[CubeMars]]
|-
| [[MjBots]]
|-
| [[Dynamixel]]
|-
| [[LinEngineering]]
|-
| [[Faradyi]]
|}
{| class="wikitable"
! Inference Boards
|-
| [[Jetson]]
|-
| [[SiMi AI]]
|}
{| class="wikitable"
|-
! Motor Controllers
|-
| [[VESCular6]]
|-
| [[ODrive]]
|-
| [[OBot]]
|-
| [[Solo Motor Controller]]
|-
| [[moteus-n1]]
|-
| [[K-Scale Motor Controller]]
|}
{| class="wikitable"
|-
! Communication Protocols
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]], [[Alex]]
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[MagicLab,_DREAME]]
| [[MagicBot]]
|-
| [[Engineered Arts]]
| [[Ameca]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure AI]]
| [[Figure 01]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[GALBOT]]
| [[GALBOT]]
|-
| [[Haier]]
| [[Kuavo (Kuafu)]]
|-
| [[Honda Robotics]]
| [[ASIMO]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Instituto Italiano]]
| [[iCub]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[LASER Robotics]]
| [[HECTOR V2]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[NASA]]
| [[Valkyrie]], [[Robonaut2]]
|-
| [[NEURA Robotics]]
| [[4NE-1]]
|-
| [[Noetix]]
| [[Dora]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[PaXini]]
| [[Tora]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[Pollen Robotics]]
| [[Reachy]]
|-
| [[Proxy]]
|
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[SoftBank Robotics]]
| [[Pepper]], [[NAO]]
|-
| [[Stanford Robotics Lab]]
| [[OceanOneK]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[SUPCON]]
| [[Navigator α]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[THK]]
|
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[UC Berkeley]]
| [[Berkeley Blue]]
|-
| [[Unitree]]
| [[H1]], [[G1]]
|-
| [[University of Tehran]]
| [[Surena IV]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[WorkFar]]
| [[WorkFar Syntro]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Xpeng]]
| [[PX5]]
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
d83f502a72e2747166766e3006da305da416f76d
1885
1884
2024-10-15T20:23:47Z
Ben
2
/* Components */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
Feel free to join our [https://discord.gg/kscale Discord community] for more real-time discussion!
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
{| class="wikitable mw-collapsible mw-collapsed"
|+ Resources
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.youtube.com/watch?v=LiNgr1tz49I&list=PLZnJoM76RM6ItAfZIxJYNKdaR_BobleLY Advanced Robot Dynamics]
| High-quality open-source course from CMU
|-
| [https://www.youtube.com/watch?v=6rUdAOCNXAU&list=PLZnJoM76RM6KugDT9sw5zhAmqKnGeoLRa Optimal Control]
| High-quality open-source course from CMU
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
| Textbook for graduate-level courses in robotics
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulated and real environments
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Courses]]
| Category for pages about useful courses related to robotics
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Firmware]]
| Category for pages relating to firmware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Papers]]
| Category for humanoid robotics papers
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== Components ===
{| class="wikitable mw-collapsible mw-collapsed"
|+ Actuators
|-
| [[SPIN Servo]]
|-
| [[DEEP Robotics J60]]
|-
| [[Robstride]]
|-
| [[MyActuator]]
|-
| [[Encos]]
|-
| [[Steadywin]]
|-
| [[Elemental Motors]]
|-
| [[Anydrive]]
|-
| [[HEBI]]
|-
| [[ZeroErr]]
|-
| [[CubeMars]]
|-
| [[MjBots]]
|-
| [[Dynamixel]]
|-
| [[LinEngineering]]
|-
| [[Faradyi]]
|}
{| class="wikitable mw-collapsible mw-collapsed"
|+ Inference Boards
|-
| [[Jetson]]
|-
| [[SiMi AI]]
|}
{| class="wikitable mw-collapsible mw-collapsed"
|+ Motor Controllers
|-
| [[VESCular6]]
|-
| [[ODrive]]
|-
| [[OBot]]
|-
| [[Solo Motor Controller]]
|-
| [[moteus-n1]]
|-
| [[K-Scale Motor Controller]]
|}
{| class="wikitable mw-collapsible mw-collapsed"
|+ Communication Protocols
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]], [[Alex]]
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[MagicLab,_DREAME]]
| [[MagicBot]]
|-
| [[Engineered Arts]]
| [[Ameca]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure AI]]
| [[Figure 01]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[GALBOT]]
| [[GALBOT]]
|-
| [[Haier]]
| [[Kuavo (Kuafu)]]
|-
| [[Honda Robotics]]
| [[ASIMO]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Instituto Italiano]]
| [[iCub]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[LASER Robotics]]
| [[HECTOR V2]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[NASA]]
| [[Valkyrie]], [[Robonaut2]]
|-
| [[NEURA Robotics]]
| [[4NE-1]]
|-
| [[Noetix]]
| [[Dora]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[PaXini]]
| [[Tora]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[Pollen Robotics]]
| [[Reachy]]
|-
| [[Proxy]]
|
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[SoftBank Robotics]]
| [[Pepper]], [[NAO]]
|-
| [[Stanford Robotics Lab]]
| [[OceanOneK]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[SUPCON]]
| [[Navigator α]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[THK]]
|
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[UC Berkeley]]
| [[Berkeley Blue]]
|-
| [[Unitree]]
| [[H1]], [[G1]]
|-
| [[University of Tehran]]
| [[Surena IV]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[WorkFar]]
| [[WorkFar Syntro]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Xpeng]]
| [[PX5]]
|}
=== Communication Protocols ===
{| class="wikitable"
|-
! Name
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
9e4c5e2c0f7635ec53870629de045b8f1617d0af
1886
1885
2024-10-15T20:24:01Z
Ben
2
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
Feel free to join our [https://discord.gg/kscale Discord community] for more real-time discussion!
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
{| class="wikitable mw-collapsible mw-collapsed"
|+ Resources
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.youtube.com/watch?v=LiNgr1tz49I&list=PLZnJoM76RM6ItAfZIxJYNKdaR_BobleLY Advanced Robot Dynamics]
| High-quality open-source course from CMU
|-
| [https://www.youtube.com/watch?v=6rUdAOCNXAU&list=PLZnJoM76RM6KugDT9sw5zhAmqKnGeoLRa Optimal Control]
| High-quality open-source course from CMU
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
| Textbook for graduate-level courses in robotics
|-
| [[Learning algorithms]]
| Resources related with training humanoid models in simulation and real environments
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Courses]]
| Category for pages about useful courses related to robotics
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Firmware]]
| Category for pages relating to firmware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Papers]]
| Category for humanoid robotics papers
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== Components ===
{| class="wikitable mw-collapsible mw-collapsed"
|+ Actuators
|-
| [[SPIN Servo]]
|-
| [[DEEP Robotics J60]]
|-
| [[Robstride]]
|-
| [[MyActuator]]
|-
| [[Encos]]
|-
| [[Steadywin]]
|-
| [[Elemental Motors]]
|-
| [[Anydrive]]
|-
| [[HEBI]]
|-
| [[ZeroErr]]
|-
| [[CubeMars]]
|-
| [[MjBots]]
|-
| [[Dynamixel]]
|-
| [[LinEngineering]]
|-
| [[Faradyi]]
|}
{| class="wikitable mw-collapsible mw-collapsed"
|+ Inference Boards
|-
| [[Jetson]]
|-
| [[SiMi AI]]
|}
{| class="wikitable mw-collapsible mw-collapsed"
|+ Motor Controllers
|-
| [[VESCular6]]
|-
| [[ODrive]]
|-
| [[OBot]]
|-
| [[Solo Motor Controller]]
|-
| [[moteus-n1]]
|-
| [[K-Scale Motor Controller]]
|}
{| class="wikitable mw-collapsible mw-collapsed"
|+ Communication Protocols
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[Agibot]]
| [[RAISE-A1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]], [[Alex]]
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[MagicLab,_DREAME]]
| [[MagicBot]]
|-
| [[Engineered Arts]]
| [[Ameca]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure AI]]
| [[Figure 01]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[GALBOT]]
| [[GALBOT]]
|-
| [[Haier]]
| [[Kuavo (Kuafu)]]
|-
| [[Honda Robotics]]
| [[ASIMO]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Instituto Italiano]]
| [[iCub]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[LASER Robotics]]
| [[HECTOR V2]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[NASA]]
| [[Valkyrie]], [[Robonaut2]]
|-
| [[NEURA Robotics]]
| [[4NE-1]]
|-
| [[Noetix]]
| [[Dora]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[PaXini]]
| [[Tora]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[Pollen Robotics]]
| [[Reachy]]
|-
| [[Proxy]]
|
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[SoftBank Robotics]]
| [[Pepper]], [[NAO]]
|-
| [[Stanford Robotics Lab]]
| [[OceanOneK]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[SUPCON]]
| [[Navigator α]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[THK]]
|
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[UC Berkeley]]
| [[Berkeley Blue]]
|-
| [[Unitree]]
| [[H1]], [[G1]]
|-
| [[University of Tehran]]
| [[Surena IV]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[WorkFar]]
| [[WorkFar Syntro]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Xpeng]]
| [[PX5]]
|}
8d07cc75401379baafe2d4995a4ac0c15bce6621
1887
1886
2024-10-24T20:37:44Z
Ben
2
/* Robots */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
Feel free to join our [https://discord.gg/kscale Discord community] for more real-time discussion!
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
{| class="wikitable mw-collapsible mw-collapsed"
|+ Resources
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.youtube.com/watch?v=LiNgr1tz49I&list=PLZnJoM76RM6ItAfZIxJYNKdaR_BobleLY Advanced Robot Dynamics]
| High-quality open-source course from CMU
|-
| [https://www.youtube.com/watch?v=6rUdAOCNXAU&list=PLZnJoM76RM6KugDT9sw5zhAmqKnGeoLRa Optimal Control]
| High-quality open-source course from CMU
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
| Textbook for graduate-level courses in robotics
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Courses]]
| Category for pages about useful courses related to robotics
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Firmware]]
| Category for pages relating to firmware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Papers]]
| Category for humanoid robotics papers
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== Components ===
{| class="wikitable mw-collapsible mw-collapsed"
|+ Actuators
|-
| [[SPIN Servo]]
|-
| [[DEEP Robotics J60]]
|-
| [[Robstride]]
|-
| [[MyActuator]]
|-
| [[Encos]]
|-
| [[Steadywin]]
|-
| [[Elemental Motors]]
|-
| [[Anydrive]]
|-
| [[HEBI]]
|-
| [[ZeroErr]]
|-
| [[CubeMars]]
|-
| [[MjBots]]
|-
| [[Dynamixel]]
|-
| [[LinEngineering]]
|-
| [[Faradyi]]
|}
{| class="wikitable mw-collapsible mw-collapsed"
|+ Inference Boards
|-
| [[Jetson]]
|-
| [[SiMi AI]]
|}
{| class="wikitable mw-collapsible mw-collapsed"
|+ Motor Controllers
|-
| [[VESCular6]]
|-
| [[ODrive]]
|-
| [[OBot]]
|-
| [[Solo Motor Controller]]
|-
| [[moteus-n1]]
|-
| [[K-Scale Motor Controller]]
|}
{| class="wikitable mw-collapsible mw-collapsed"
|+ Communication Protocols
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[AGIBot]]
| [[RAISE-A1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]], [[Alex]]
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[MagicLab,_DREAME]]
| [[MagicBot]]
|-
| [[Engineered Arts]]
| [[Ameca]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure AI]]
| [[Figure 01]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[GALBOT]]
| [[GALBOT]]
|-
| [[Haier]]
| [[Kuavo (Kuafu)]]
|-
| [[Honda Robotics]]
| [[ASIMO]]
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Instituto Italiano]]
| [[iCub]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[LASER Robotics]]
| [[HECTOR V2]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[NASA]]
| [[Valkyrie]], [[Robonaut2]]
|-
| [[NEURA Robotics]]
| [[4NE-1]]
|-
| [[Noetix]]
| [[Dora]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[PaXini]]
| [[Tora]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[Pollen Robotics]]
| [[Reachy]]
|-
| [[Proxy]]
|
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[SoftBank Robotics]]
| [[Pepper]], [[NAO]]
|-
| [[Stanford Robotics Lab]]
| [[OceanOneK]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[SUPCON]]
| [[Navigator α]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[THK]]
|
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[UC Berkeley]]
| [[Berkeley Blue]]
|-
| [[Unitree]]
| [[H1]], [[G1]]
|-
| [[University of Tehran]]
| [[Surena IV]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[WorkFar]]
| [[WorkFar Syntro]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Xpeng]]
| [[PX5]]
|}
875143050a010a74e17a70cca80b02d329373bf5
1895
1887
2024-11-01T09:44:13Z
Kris
97
/* Robots */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
Feel free to join our [https://discord.gg/kscale Discord community] for more real-time discussion!
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
{| class="wikitable mw-collapsible mw-collapsed"
|+ Resources
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.youtube.com/watch?v=LiNgr1tz49I&list=PLZnJoM76RM6ItAfZIxJYNKdaR_BobleLY Advanced Robot Dynamics]
| High-quality open-source course from CMU
|-
| [https://www.youtube.com/watch?v=6rUdAOCNXAU&list=PLZnJoM76RM6KugDT9sw5zhAmqKnGeoLRa Optimal Control]
| High-quality open-source course from CMU
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
| Textbook for graduate-level courses in robotics
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Courses]]
| Category for pages about useful courses related to robotics
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Firmware]]
| Category for pages relating to firmware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Papers]]
| Category for humanoid robotics papers
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== Components ===
{| class="wikitable mw-collapsible mw-collapsed"
|+ Actuators
|-
| [[SPIN Servo]]
|-
| [[DEEP Robotics J60]]
|-
| [[Robstride]]
|-
| [[MyActuator]]
|-
| [[Encos]]
|-
| [[Steadywin]]
|-
| [[Elemental Motors]]
|-
| [[Anydrive]]
|-
| [[HEBI]]
|-
| [[ZeroErr]]
|-
| [[CubeMars]]
|-
| [[MjBots]]
|-
| [[Dynamixel]]
|-
| [[LinEngineering]]
|-
| [[Faradyi]]
|}
{| class="wikitable mw-collapsible mw-collapsed"
|+ Inference Boards
|-
| [[Jetson]]
|-
| [[SiMi AI]]
|}
{| class="wikitable mw-collapsible mw-collapsed"
|+ Motor Controllers
|-
| [[VESCular6]]
|-
| [[ODrive]]
|-
| [[OBot]]
|-
| [[Solo Motor Controller]]
|-
| [[moteus-n1]]
|-
| [[K-Scale Motor Controller]]
|}
{| class="wikitable mw-collapsible mw-collapsed"
|+ Communication Protocols
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[AGIBot]]
| [[RAISE-A1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]], [[Alex]]
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[MagicLab,_DREAME]]
| [[MagicBot]]
|-
| [[Engineered Arts]]
| [[Ameca]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure AI]]
| [[Figure 01]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[GALBOT]]
| [[GALBOT]]
|-
| [[Haier]]
| [[Kuavo (Kuafu)]]
|-
| [[Honda Robotics]]
| [[ASIMO]]
|-
| [[Humanoid]]
|
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Instituto Italiano]]
| [[iCub]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[LASER Robotics]]
| [[HECTOR V2]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[NASA]]
| [[Valkyrie]], [[Robonaut2]]
|-
| [[NEURA Robotics]]
| [[4NE-1]]
|-
| [[Noetix]]
| [[Dora]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[PaXini]]
| [[Tora]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[Pollen Robotics]]
| [[Reachy]]
|-
| [[Proxy]]
|
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[SoftBank Robotics]]
| [[Pepper]], [[NAO]]
|-
| [[Stanford Robotics Lab]]
| [[OceanOneK]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[SUPCON]]
| [[Navigator α]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[THK]]
|
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[UC Berkeley]]
| [[Berkeley Blue]]
|-
| [[Unitree]]
| [[H1]], [[G1]]
|-
| [[University of Tehran]]
| [[Surena IV]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[WorkFar]]
| [[WorkFar Syntro]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Xpeng]]
| [[PX5]]
|}
921072b715bb7ea3039c3bfdc708768db9b64af4
Moteus-n1
0
427
1851
2024-09-25T03:02:48Z
Ben
2
Created page with "The [https://mjbots.com/products/moteus-n1 moteus-n1] is a motor controller from MJBots. The project is open-source, with firmware available [https://github.com/mjbots/moteus..."
wikitext
text/x-wiki
The [https://mjbots.com/products/moteus-n1 moteus-n1] is a motor controller from MJBots.
The project is open-source, with firmware available [https://github.com/mjbots/moteus on GitHub].
{{infobox actuator
|name = moteus-n1
|manufacturer = MJBots
|cost =
|purchase_link = https://mjbots.com/products/moteus-n1
|nominal_torque =
|peak_torque =
|weight =
|dimensions =
|gear_ratio =
|voltage = 10 to 54VDC
|cad_link = https://mjbots.com/products/moteus-n1
|interface = CAN-FD
|gear_type =
}}
3750a5bd321567b4c0352403101bf292124c953d
1852
1851
2024-09-25T03:05:00Z
Ben
2
wikitext
text/x-wiki
The [https://mjbots.com/products/moteus-n1 moteus-n1] is a motor controller from MJBots.
The project is open-source, with firmware and a Python client library available [https://github.com/mjbots/moteus on GitHub]; a minimal usage sketch is shown below.
{{infobox actuator
|name = moteus-n1
|manufacturer = MJBots
|cost =
|purchase_link = https://mjbots.com/products/moteus-n1
|nominal_torque =
|peak_torque =
|weight =
|dimensions =
|gear_ratio =
|voltage = 10 to 54VDC
|cad_link = https://mjbots.com/products/moteus-n1
|interface = CAN-FD
|gear_type =
}}
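== Example usage ==
The sketch below is a minimal, illustrative example of commanding a single board over CAN-FD from Python using the open-source moteus library; the controller ID, transport, velocity, and loop timing are assumptions for demonstration rather than recommended settings.
<syntaxhighlight lang=python>
import asyncio
import math

import moteus  # pip install moteus


async def main():
    # Assumes one controller at the default ID (1), reachable over an
    # fdcanusb or socketcan transport that the library auto-detects.
    c = moteus.Controller()

    # Clear any outstanding faults before commanding motion.
    await c.set_stop()

    while True:
        # Command a constant velocity and query the servo state back.
        state = await c.set_position(
            position=math.nan, velocity=0.2, query=True)
        print("position:", state.values[moteus.Register.POSITION])
        await asyncio.sleep(0.02)


asyncio.run(main())
</syntaxhighlight>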
[[Category: Actuators]]
cf862aff52cea254fc3f87b9a58e1bad359c6be1
Anydrive
0
435
1861
2024-10-15T00:46:07Z
Ben
2
Created page with "Anydrive actuators, developed by ETH Zurich's Robotic Systems Lab, are high-performance, series elastic actuators designed for robotics. They offer precise torque, position, a..."
wikitext
text/x-wiki
Anydrive actuators, developed by ETH Zurich's Robotic Systems Lab, are high-performance, series elastic actuators designed for robotics. They offer precise torque, position, and impedance control, making them suitable for legged robots like ANYmal. These actuators integrate motors, custom springs, high-precision encoders, and power electronics in a compact, water-resistant design. Anydrive's modular nature simplifies assembly and maintenance, and they communicate over CAN and CANopen protocols with ROS compatibility.<ref>https://rsl.ethz.ch/robots-media/actuators/anydrive.html</ref><ref>https://rsl.ethz.ch/robots-media/actuators.html</ref><ref>https://researchfeatures.com/anymal-unique-quadruped-robot-conquering-harsh-environments/</ref>
== References ==
<references />
[[Category:Actuators]]
27d09c960fadbe2463e8c3e05a654c2e0ed9443c
1862
1861
2024-10-15T00:51:36Z
Ben
2
wikitext
text/x-wiki
'''ANYdrive''' is a high-performance actuator developed by [[ANYbotics]], a Swiss robotics company specializing in mobile legged robots. ANYdrive actuators are designed for robust and versatile robotic applications, particularly in challenging environments.
==Overview==
ANYdrive combines electric motor technology with integrated sensors and control electronics to provide dynamic and precise motion control. The actuator is used in ANYbotics' quadrupedal robots, such as the [[ANYmal]] robot.
==Features==
* '''High torque density''': ANYdrive actuators deliver high torque in a compact form factor.
* '''Integrated sensors''': Equipped with position, velocity, torque, and temperature sensors.
* '''Robust design''': Suitable for harsh environments with dust and water protection.
* '''Control electronics''': Integrated controllers for precise motion control.
==Applications==
ANYdrive is utilized in various robotic platforms for research, inspection, and industrial automation.
==References==
<references>
<ref name="ANYbotics ANYdrive">{{cite web | title=ANYdrive Actuator | url=https://www.anybotics.com/anydrive/ | publisher=ANYbotics | accessdate=October 14, 2024}}</ref>
</references>
921a85dd0bd23a7161e318ed06a482b6a4625c11
1863
1862
2024-10-15T00:53:03Z
Ben
2
wikitext
text/x-wiki
'''ANYdrive''' is a high-performance actuator developed by ANYbotics, a Swiss robotics company specializing in mobile legged robots.<ref name="ANYbotics ANYdrive">{{cite web | title=ANYdrive Actuator | url=https://www.anybotics.com/anydrive/ | publisher=ANYbotics | accessdate=October 14, 2024}}</ref> ANYdrive actuators are designed for robust and versatile robotic applications, particularly in challenging environments.
==Overview==
ANYdrive combines electric motor technology with integrated sensors and control electronics to provide dynamic and precise motion control.<ref name="ANYbotics ANYdrive" /> The actuator is used in ANYbotics' quadrupedal robots, such as the ANYmal robot.
==Features==
* '''High torque density''': ANYdrive actuators deliver high torque in a compact form factor.<ref name="ANYbotics ANYdrive" />
* '''Integrated sensors''': Equipped with position, velocity, torque, and temperature sensors.
* '''Robust design''': Suitable for harsh environments with dust and water protection.
* '''Control electronics''': Integrated controllers for precise motion control.
==Applications==
ANYdrive is utilized in various robotic platforms for research, inspection, and industrial automation.<ref name="ANYbotics ANYdrive" />
==References==
<references />
884275d4bba4eb380945931cc35aa2b771d0a994
1864
1863
2024-10-15T00:53:39Z
Ben
2
wikitext
text/x-wiki
'''ANYdrive''' is a high-performance actuator developed by ANYbotics, a Swiss robotics company specializing in mobile legged robots.<ref name="ANYbotics ANYdrive">https://www.anybotics.com/anydrive/</ref> ANYdrive actuators are designed for robust and versatile robotic applications, particularly in challenging environments.
==Overview==
ANYdrive combines electric motor technology with integrated sensors and control electronics to provide dynamic and precise motion control.<ref name="ANYbotics ANYdrive" /> The actuator is used in ANYbotics' quadrupedal robots, such as the ANYmal robot.
==Features==
* '''High torque density''': ANYdrive actuators deliver high torque in a compact form factor.<ref name="ANYbotics ANYdrive" />
* '''Integrated sensors''': Equipped with position, velocity, torque, and temperature sensors.
* '''Robust design''': Suitable for harsh environments with dust and water protection.
* '''Control electronics''': Integrated controllers for precise motion control.
==Applications==
ANYdrive is utilized in various robotic platforms for research, inspection, and industrial automation.<ref name="ANYbotics ANYdrive" />
==References==
<references />
1632ae12ee3f63d4ec842ecb74457141d06c89c5
1865
1864
2024-10-15T00:53:51Z
Ben
2
wikitext
text/x-wiki
'''ANYdrive''' is a high-performance actuator developed by ANYbotics, a Swiss robotics company specializing in mobile legged robots.<ref name="ANYbotics ANYdrive">https://www.anybotics.com/anydrive/</ref> ANYdrive actuators are designed for robust and versatile robotic applications, particularly in challenging environments.
==Overview==
ANYdrive combines electric motor technology with integrated sensors and control electronics to provide dynamic and precise motion control.<ref name="ANYbotics ANYdrive" /> The actuator is used in ANYbotics' quadrupedal robots, such as the ANYmal robot.
==Features==
* '''High torque density''': ANYdrive actuators deliver high torque in a compact form factor.<ref name="ANYbotics ANYdrive" />
* '''Integrated sensors''': Equipped with position, velocity, torque, and temperature sensors.
* '''Robust design''': Suitable for harsh environments with dust and water protection.
* '''Control electronics''': Integrated controllers for precise motion control.
==Applications==
ANYdrive is utilized in various robotic platforms for research, inspection, and industrial automation.<ref name="ANYbotics ANYdrive" />
==References==
<references />
f30b77f049ef6846c5f6d73a916316c0aa39bcb7
1866
1865
2024-10-15T00:54:29Z
Ben
2
wikitext
text/x-wiki
'''ANYdrive''' is a high-performance actuator developed by ANYbotics, a Swiss robotics company specializing in mobile legged robots.<ref name="ANYbotics ANYdrive">https://www.anybotics.com/anydrive/</ref> ANYdrive actuators are designed for robust and versatile robotic applications, particularly in challenging environments.
==Overview==
ANYdrive combines electric motor technology with integrated sensors and control electronics to provide dynamic and precise motion control.<ref name="ANYbotics ANYdrive" /> The actuator is used in ANYbotics' quadrupedal robots, such as the ANYmal robot.
==Features==
* '''High torque density''': ANYdrive actuators deliver high torque in a compact form factor.<ref name="ANYbotics ANYdrive" />
* '''Integrated sensors''': Equipped with position, velocity, torque, and temperature sensors.
* '''Robust design''': Suitable for harsh environments with dust and water protection.
* '''Control electronics''': Integrated controllers for precise motion control.
==Applications==
ANYdrive is utilized in various robotic platforms for research, inspection, and industrial automation.<ref name="ANYbotics ANYdrive" />
==References==
<references />
2a631698de9978dcc12b527452340e7621a55edc
HEBI
0
436
1867
2024-10-15T00:57:09Z
Ben
2
Created page with "'''HEBI Robotics''' is an American company that specializes in modular robotic components and systems. Founded in 2014 and based in Pittsburgh, Pennsylvania, HEBI Robotics pro..."
wikitext
text/x-wiki
'''HEBI Robotics''' is an American company that specializes in modular robotic components and systems. Founded in 2014 and based in Pittsburgh, Pennsylvania, HEBI Robotics provides hardware and software tools to create custom robots quickly and affordably.<ref name="HEBI About">https://www.hebirobotics.com/about</ref>
==Products==
HEBI offers modular actuators, known as '''X-Series Actuators''', which integrate motors, gearboxes, encoders, and control electronics into compact modules.<ref name="HEBI Products">https://www.hebirobotics.com/products</ref>
==Features==
* '''Modularity''': Allows for rapid prototyping and custom configurations.
* '''Integrated control''': Built-in controllers enable precise motion control.
* '''Software tools''': Provides APIs and software development kits for robot control.
==Applications==
HEBI's products are used in research, education, and industry for applications such as manipulation, mobile robotics, and human-robot interaction.<ref name="HEBI Applications">https://www.hebirobotics.com/applications</ref>
==References==
<references />
c9bc1465a61acce4433a2b862c3fae973d3cf8dd
1868
1867
2024-10-15T00:57:30Z
Ben
2
wikitext
text/x-wiki
'''HEBI Robotics''' is an American company that specializes in modular robotic components and systems. Founded in 2014 and based in Pittsburgh, Pennsylvania, HEBI Robotics provides hardware and software tools to create custom robots quickly and affordably.<ref name="HEBI About">https://www.hebirobotics.com/about</ref>
==Products==
HEBI offers modular actuators, known as '''X-Series Actuators''', which integrate motors, gearboxes, encoders, and control electronics into compact modules.
==Features==
* '''Modularity''': Allows for rapid prototyping and custom configurations.
* '''Integrated control''': Built-in controllers enable precise motion control.
* '''Software tools''': Provides APIs and software development kits for robot control.
==Applications==
HEBI's products are used in research, education, and industry for applications such as manipulation, mobile robotics, and human-robot interaction.<ref name="HEBI Applications">https://www.hebirobotics.com/applications</ref>
==References==
<references />
39a37dc77dbca9036d948ba74a921c9ce2fcb482
1869
1868
2024-10-15T00:57:53Z
Ben
2
wikitext
text/x-wiki
'''HEBI Robotics''' is an American company that specializes in modular robotic components and systems. Founded in 2014 and based in Pittsburgh, Pennsylvania, HEBI Robotics provides hardware and software tools to create custom robots quickly and affordably.<ref name="HEBI About">https://www.hebirobotics.com/about</ref>
==Products==
HEBI offers modular actuators, known as '''X-Series Actuators''', which integrate motors, gearboxes, encoders, and control electronics into compact modules.<ref>https://www.hebirobotics.com/actuators</ref>
==Features==
* '''Modularity''': Allows for rapid prototyping and custom configurations.
* '''Integrated control''': Built-in controllers enable precise motion control.
* '''Software tools''': Provides APIs and software development kits for robot control (see the example sketch below).
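As a rough illustration of the software tools mentioned above, the sketch below uses HEBI's Python API (the hebi-py package) to hold a module group at its current position; the family and module names are placeholders that depend on how the actuators are configured.
<syntaxhighlight lang=python>
import time

import hebi  # pip install hebi-py

# Discover modules on the local network; "Arm" / "J1" below are placeholder
# family and module names, not defaults.
lookup = hebi.Lookup()
time.sleep(2.0)  # give the lookup time to find modules

group = lookup.get_group_from_names(['Arm'], ['J1'])
if group is None:
    raise RuntimeError('No HEBI modules found with the given family/name')

cmd = hebi.GroupCommand(group.size)
fbk = hebi.GroupFeedback(group.size)

# Hold the current position for a few seconds as a trivial demonstration.
end = time.time() + 5.0
while time.time() < end:
    group.get_next_feedback(reuse_fbk=fbk)
    cmd.position = fbk.position
    group.send_command(cmd)
</syntaxhighlight>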
==Applications==
HEBI's products are used in research, education, and industry for applications such as manipulation, mobile robotics, and human-robot interaction.<ref name="HEBI Applications">https://www.hebirobotics.com/applications</ref>
==References==
<references />
98f415a6947c08f3ab1cd206b93d92d08c92b8d0
CubeMars
0
437
1870
2024-10-15T00:58:10Z
Ben
2
Created page with "'''CubeMars''' is a manufacturer of robotic actuators, motors, and related components. The company focuses on providing high-performance products for robotics, drones, and aut..."
wikitext
text/x-wiki
'''CubeMars''' is a manufacturer of robotic actuators, motors, and related components. The company focuses on providing high-performance products for robotics, drones, and automation industries.<ref name="CubeMars About">https://www.cubemars.com/about-us</ref>
==Products==
* '''Brushless DC Motors''': High-efficiency motors suitable for robotics.<ref name="CubeMars Products">https://www.cubemars.com/products</ref>
* '''Actuators''': Compact actuators with integrated control electronics.
* '''Robot Joints''': Modular joints for robotic arms and manipulators.
==Features==
* '''High Torque''': Motors and actuators designed for high-torque applications.
* '''Compact Design''': Products are designed to be space-efficient.
* '''Integrated Solutions''': Offers components that integrate multiple functions.
==Applications==
CubeMars products are used in robotic arms, exoskeletons, drones, and other automation equipment.<ref name="CubeMars Applications">https://www.cubemars.com/applications</ref>
==References==
<references />
ae0a5abb146997b5cdb86a719f667d9a23f7f5e5
MjBots
0
438
1871
2024-10-15T00:58:20Z
Ben
2
Created page with "'''mjbots''' is a robotics company that develops open-source hardware and software for high-performance robotic systems. Founded by Ben Katz, the company provides components s..."
wikitext
text/x-wiki
'''mjbots''' is a robotics company that develops open-source hardware and software for high-performance robotic systems. Founded by Josh Pieper, the company provides components such as actuators, controllers, and sensors.<ref name="mjbots About">https://mjbots.com/pages/about</ref>
==Products==
* '''qdd100 Servo Actuator''': A high-torque quasi-direct-drive actuator.<ref name="mjbots Products">https://mjbots.com/collections/all</ref>
* '''Moteus Controller''': An open-source motor controller for high-bandwidth control.
* '''Components''': Various mechanical and electronic components for robotics.
==Features==
* '''Open Source''': Hardware and software designs are open-source.
* '''High Performance''': Components designed for dynamic and responsive control.
* '''Community Support''': Active community contributing to development.
==Applications==
mjbots products are used in legged robots, robotic arms, and other advanced robotic systems.<ref name="mjbots Applications">https://mjbots.com/applications</ref>
==References==
<references />
a9228b33a46d270fb0894791bf088cb64426b611
1874
1871
2024-10-15T00:58:53Z
Ben
2
wikitext
text/x-wiki
'''mjbots''' is a robotics company that develops open-source hardware and software for high-performance robotic systems. Founded by Josh Pieper, the company provides components such as actuators, controllers, and sensors.<ref name="mjbots About">https://mjbots.com/pages/about</ref>
==Products==
* '''qdd100 Servo Actuator''': A high-torque quasi-direct-drive actuator.<ref name="mjbots Products">https://mjbots.com/collections/all</ref>
* '''Moteus Controller''': An open-source motor controller for high-bandwidth control.
* '''Components''': Various mechanical and electronic components for robotics.
==Features==
* '''Open Source''': Hardware and software designs are open-source.
* '''High Performance''': Components designed for dynamic and responsive control.
* '''Community Support''': Active community contributing to development.
==Applications==
mjbots products are used in legged robots, robotic arms, and other advanced robotic systems.<ref name="mjbots Applications">https://mjbots.com/applications</ref>
==References==
<references />
[[Category:Actuators]]
[[Category:Open Source]]
dafd1c8f81778549f9c6187d891185cd452f87ee
Dynamixel
0
439
1872
2024-10-15T00:58:28Z
Ben
2
Created page with "'''DYNAMIXEL''' is a line of smart actuators developed by the Korean robotics company Robotis. DYNAMIXEL servos are widely used in robotics for their integrated design, which..."
wikitext
text/x-wiki
'''DYNAMIXEL''' is a line of smart actuators developed by the Korean robotics company Robotis. DYNAMIXEL servos are widely used in robotics for their integrated design, which combines motor, gearbox, controller, and networking capabilities.<ref name="Robotis DYNAMIXEL">https://www.robotis.us/dynamixel/</ref>
==Features==
* '''Integrated Design''': Combines motor, reduction gear, controller, and network functionality.
* '''Daisy-Chain Networking''': Multiple units can be connected in series for simplified wiring.
* '''Programmable''': Supports various control modes and parameters (see the example sketch after this list).
* '''Feedback''': Provides real-time feedback on position, speed, load, voltage, and temperature.
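As a rough sketch of the programmability described above, the example below uses the official DYNAMIXEL SDK for Python to enable torque and command a goal position over a serial adapter; the control-table addresses shown are the Protocol 2.0 X-Series values, and the port, servo ID, and baud rate are assumptions that must match the actual setup.
<syntaxhighlight lang=python>
from dynamixel_sdk import PortHandler, PacketHandler  # pip install dynamixel-sdk

# Control-table addresses for Protocol 2.0 X-Series servos; other series use
# different addresses, so check the ROBOTIS e-Manual for the specific model.
ADDR_TORQUE_ENABLE = 64
ADDR_GOAL_POSITION = 116

DXL_ID = 1               # assumed servo ID
DEVICE = '/dev/ttyUSB0'  # assumed serial adapter
BAUDRATE = 57600         # assumed baud rate

port = PortHandler(DEVICE)
packet = PacketHandler(2.0)  # Protocol 2.0

if not port.openPort() or not port.setBaudRate(BAUDRATE):
    raise RuntimeError('Failed to open port or set baud rate')

# Enable torque, then move to the mid-point of the 0-4095 position range.
packet.write1ByteTxRx(port, DXL_ID, ADDR_TORQUE_ENABLE, 1)
packet.write4ByteTxRx(port, DXL_ID, ADDR_GOAL_POSITION, 2048)

port.closePort()
</syntaxhighlight>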
==Product Lines==
* '''AX Series''': Entry-level actuators for education and hobbyists.
* '''MX Series''': Mid-level actuators with improved precision and control.
* '''X-Series''': Advanced actuators with high performance and reliability.
* '''Pro Series''': Professional-grade actuators for industrial applications.
==Applications==
DYNAMIXEL servos are used in humanoid robots, robotic arms, animatronics, and research projects.<ref name="Robotis Applications">https://www.robotis.us/applications/</ref>
==References==
<references />
2fdbba6f7732fe93e91325df2b89cd9d68b07aab
LinEngineering
0
440
1873
2024-10-15T00:58:36Z
Ben
2
Created page with "'''Lin Engineering''' is an American company specializing in the design and manufacturing of stepper motors and motion control solutions. Founded in 1987 and headquartered in..."
wikitext
text/x-wiki
'''Lin Engineering''' is an American company specializing in the design and manufacturing of stepper motors and motion control solutions. Founded in 1987 and headquartered in Morgan Hill, California, Lin Engineering serves various industries including medical, automotive, and industrial automation.<ref name="Lin Engineering About">https://www.linengineering.com/about-us/</ref>
==Products==
* '''Stepper Motors''': High-precision motors ranging from standard to custom designs.<ref name="Lin Engineering Products">https://www.linengineering.com/products/</ref>
* '''Integrated Motors''': Motors with built-in controllers and drivers.
* '''Motion Controllers''': Standalone controllers for precise motion applications.
* '''Accessories''': Gearboxes, encoders, and other motion control components.
==Features==
* '''Customization''': Offers tailored solutions to meet specific application needs.
* '''High Precision''': Motors designed for accurate positioning and repeatability.
* '''Quality Assurance''': Adheres to strict quality control standards.
==Applications==
Lin Engineering's products are used in medical devices, robotics, aerospace, and other precision-driven fields.<ref name="Lin Engineering Applications">https://www.linengineering.com/applications/</ref>
==References==
<references />
4598e3505b4ff49ec7ab65dd78e39e231a7e97ce
SiMi AI
0
441
1876
2024-10-15T20:10:50Z
Ben
2
Created page with "[https://sima.ai/ SiMa AI] is a manufacturer of RISC-based neural network inference boards."
wikitext
text/x-wiki
[https://sima.ai/ SiMa AI] is a manufacturer of RISC-based neural network inference boards.
fbb95138933c47669934129706fa578a8a8404ff
Jetson
0
442
1877
2024-10-15T20:11:31Z
Ben
2
Created page with "The [https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-nano/product-development/ Jetson Nano] and [https://www.nvidia.com/en-us/autonomous-machines/embe..."
wikitext
text/x-wiki
The [https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-nano/product-development/ Jetson Nano] and [https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-orin/ Jetson Orin] are embedded devices produced by Nvidia for edge neural network inference.
c0240e802b3315a5ad00b5a0b5519b289f353807
AGIBot
0
64
1888
285
2024-10-24T20:38:01Z
Ben
2
Ben moved page [[Agibot]] to [[AGIBot]]
wikitext
text/x-wiki
The "Agi" in their name stands for "Artificial General Intelligence". Their CEO gave a detailed presentation here: https://www.youtube.com/watch?v=ZwjxbDVbGpU&t=1471s
{{infobox company
| name = Agibot
| country = China
| website_link = https://www.agibot.com/
| robots = [[RAISE-A1]]
}}
[[Category:Companies]]
73984c3a371c72de33880735a346e5eb1c3be56b
1890
1888
2024-10-24T20:44:19Z
Ben
2
wikitext
text/x-wiki
The "AGI" in their name stands for "Artificial General Intelligence".
{{infobox company
| name = Agibot
| country = China
| website_link = https://www.agibot.com/
| robots = [[RAISE-A1]]
}}
== Notable People ==
* [https://www.youtube.com/@%E7%A8%9A%E6%99%96%E5%90%9B 稚晖君]
== Links ==
* [https://www.youtube.com/watch?v=ZwjxbDVbGpU&t=1471s Presentation]
* [https://github.com/AgibotTech/agibot_x1_hardware AGIBot X1 Hardware]
* [https://github.com/AgibotTech/agibot_x1_train AGIBot X1 Training Code]
* [https://github.com/AgibotTech/agibot_x1_infer AGIBot X1 Inference Code]
[[Category:Companies]]
b8e52493948169a340e97f1c2a226db054656e2c
1891
1890
2024-10-24T20:45:23Z
Ben
2
/* Links */
wikitext
text/x-wiki
The "AGI" in their name stands for "Artificial General Intelligence".
{{infobox company
| name = Agibot
| country = China
| website_link = https://www.agibot.com/
| robots = [[RAISE-A1]]
}}
== Notable People ==
* [https://www.youtube.com/@%E7%A8%9A%E6%99%96%E5%90%9B 稚晖君]
== Links ==
* [https://www.agibot.com/ Website]
* [https://www.youtube.com/watch?v=ZwjxbDVbGpU&t=1471s Presentation]
* [https://github.com/AgibotTech/agibot_x1_hardware AGIBot X1 Hardware]
* [https://github.com/AgibotTech/agibot_x1_train AGIBot X1 Training Code]
* [https://github.com/AgibotTech/agibot_x1_infer AGIBot X1 Inference Code]
[[Category:Companies]]
f527436f5e30aa2b737b5f356ebb8bc2eacea210
Agibot
0
443
1889
2024-10-24T20:38:01Z
Ben
2
Ben moved page [[Agibot]] to [[AGIBot]]
wikitext
text/x-wiki
#REDIRECT [[AGIBot]]
a3dd58b9e61331e93f00fff769e504f7ba29f8c4
Figure 01
0
121
1892
434
2024-10-25T07:18:38Z
Ben
2
wikitext
text/x-wiki
Figure 01 is a humanoid robot from [[Figure AI]].
{{infobox robot
| name = Figure 01
| organization = [[Figure AI]]
| height = 167.6 cm
| weight = 60 kg
| payload = 20 kg
| runtime = 5 hours
| speed = 1.2 m/s
| video_link = https://www.youtube.com/watch?v=48qL8Jt39Vs
| cost =
}}
[[Category:Stompy, Expand!]]
[[Category:Robots]]
3e845df3b9f5a8e86ca31d18cb831bd78c69d235
Figure AI
0
122
1893
435
2024-10-25T07:18:47Z
Ben
2
wikitext
text/x-wiki
Figure AI is building a humanoid robot called [[Figure 01]].
{{infobox company
| name = Figure AI
| country = USA
| website_link = https://www.figure.ai/
| robots = [[Figure 01]]
}}
[[Category:Stompy, Expand!]]
[[Category:Companies]]
ba66fbc3e02a573b155b3bd19f79b79652a722c6
1894
1893
2024-10-29T05:34:00Z
Ben
2
wikitext
text/x-wiki
'''Figure Inc.''' is a robotics company focused on developing autonomous humanoid robots for industrial and potential consumer applications. Founded in 2022 in Silicon Valley, the company aims to address labor shortages by deploying general-purpose robots capable of performing complex tasks. Its current model, Figure 02, incorporates NVIDIA’s AI computing technologies, including NVIDIA H100 GPUs, enabling high-performance vision, dexterity, and conversational AI.
{{infobox company
| name = Figure AI
| country = USA
| website_link = https://www.figure.ai/
| robots = [[Figure 01]]
}}
== History ==
Founded by Brett Adcock, Figure demonstrated rapid early progress in humanoid robotics, leveraging synthetic data from NVIDIA Isaac Sim. The company’s humanoid robots were first deployed in testing environments, such as BMW's production line, to demonstrate their real-world applications.
== Technology and Partnerships ==
Figure’s robots utilize NVIDIA’s robotics stack, with models trained on NVIDIA DGX for large-scale AI capabilities and designed using the NVIDIA Omniverse platform. Despite having limited in-house AI expertise, Figure's collaboration with OpenAI has furthered its AI models’ conversational abilities, marking a unique blend of perception and interaction capabilities. Figure's Series B funding announcement, amounting to $675 million, supports continued research and deployment of these robotic solutions in industrial settings.
== Goals and Future Directions ==
Figure aims to commercialize its robots for industries such as manufacturing and logistics, with a potential consumer rollout on the horizon. Their humanoid models are designed for adaptability across different environments, driven by evolving AI, perception, and mobility technologies.
[[Category:Companies]]
90248f2b4c9b99a1f4a8473a47fd0c6fd227a1de
Humanoid
0
444
1896
2024-11-01T09:49:09Z
Kris
97
Created page with "== About == Humanoid is the first AI and robotics company in the UK creating the world’s leading, commercially scalable, and safe humanoid robots. Humanoid's founder is Arte..."
wikitext
text/x-wiki
== About ==
Humanoid is a UK-based AI and robotics company that describes itself as the first in the UK to build commercially scalable, safe humanoid robots.
Humanoid's founder is Artem Sokolov, investor and serial entrepreneur.
== Description ==
The company’s general-purpose humanoid robots are designed to be flexible and to adapt to complex environments, targeting industrial automation use cases such as manufacturing, warehousing, and logistics.
0fd50b8499c06b40dce751d58ef90ced5b76eccf
Humanoid
0
444
1897
1896
2024-11-01T09:54:48Z
Kris
97
wikitext
text/x-wiki
== About ==
[https://thehumanoid.ai/ Humanoid] is a UK-based AI and robotics company that describes itself as the first in the UK to build commercially scalable, safe humanoid robots.<br>
Humanoid's founder is Artem Sokolov, investor and serial entrepreneur.
== Description ==
The company’s general-purpose humanoid robots are designed to be flexible and to adapt to complex environments, targeting industrial automation use cases such as manufacturing, warehousing, and logistics.
3b4f98f5a7ae90ddf30798477c10373665f481eb
Figure 01
0
121
1898
1892
2024-11-09T19:38:54Z
Ben
2
update figure article
wikitext
text/x-wiki
Figure 01 is a humanoid robot developed by [[Figure AI]], a robotics company founded in 2022 by Brett Adcock.
{{infobox robot
| name = Figure 01
| organization = [[Figure AI]]
| height = 167.6 cm
| weight = 60 kg
| payload = 20 kg
| runtime = 5 hours
| speed = 1.2 m/s
| video_link = https://www.youtube.com/watch?v=48qL8Jt39Vs
| cost = Unknown
}}
The robot is designed to perform a variety of tasks across industries such as manufacturing, logistics, warehousing, and retail. Standing 1.68 meters (167.6 cm) tall and weighing 60 kilograms, Figure 01 is fully electric and has a runtime of approximately five hours on a single charge. It is equipped with advanced artificial intelligence systems, including integrations with OpenAI's models, enabling it to process and reason from language, as well as engage in real-time conversations with humans.
=== Development and Features ===
The development of Figure 01 progressed rapidly, with the company unveiling the prototype in October 2023, less than a year after its founding. The robot achieved dynamic walking capabilities shortly thereafter. In early 2024, Figure AI announced a partnership with OpenAI to enhance the robot's AI capabilities, allowing it to perform tasks such as making coffee by observing human actions and engaging in conversations.
=== Applications ===
Figure 01 is intended for deployment in various sectors to address labor shortages and perform tasks that are unsafe or undesirable for humans. In January 2024, Figure AI entered into an agreement with BMW to utilize the robot in automotive manufacturing facilities. Demonstrations have showcased Figure 01 performing tasks like assembling car parts and interacting with humans in real-time.
=== Limitations ===
Despite its advancements, Figure 01 has several limitations:
* '''Task Complexity''': The robot is currently capable of performing structured and repetitive tasks but may struggle with complex or unstructured environments that require advanced problem-solving skills.
* '''Speed and Efficiency''': Demonstrations have shown that Figure 01 operates at a slower pace compared to human workers, which could impact its efficiency in time-sensitive applications.
* '''Physical Capabilities''': While designed to mimic human dexterity, the robot's ability to handle delicate or intricate objects is still under development, limiting its applicability in tasks requiring fine motor skills.
* '''Battery Life''': With a runtime of approximately five hours, Figure 01 requires regular recharging, which could hinder continuous operation in certain industrial settings.
* '''Environmental Adaptability''': The robot's performance in varying environmental conditions, such as extreme temperatures or uneven terrains, has not been extensively tested, potentially limiting its deployment in diverse settings.
These limitations highlight the ongoing challenges in developing general-purpose humanoid robots capable of seamlessly integrating into human-centric environments.
[[Category:Stompy, Expand!]]
[[Category:Robots]]
a4072e8c1663d04388c97c5fe1d60a6949707c3e
Figure AI
0
122
1899
1894
2024-11-09T19:39:37Z
Ben
2
add figure 02
wikitext
text/x-wiki
'''Figure Inc.''' is a robotics company focused on developing autonomous humanoid robots for industrial and potential consumer applications. Founded in 2022 in Silicon Valley, the company aims to address labor shortages by deploying general-purpose robots capable of performing complex tasks. Its current model, Figure 02, incorporates NVIDIA’s AI computing technologies, including NVIDIA H100 GPUs, enabling high-performance vision, dexterity, and conversational AI.
{{infobox company
| name = Figure AI
| country = USA
| website_link = https://www.figure.ai/
| robots = [[Figure 01]], [[Figure 02]]
}}
== History ==
Founded by Brett Adcock, Figure demonstrated rapid early progress in humanoid robotics, leveraging synthetic data from NVIDIA Isaac Sim. The company’s humanoid robots were first deployed in testing environments, such as BMW's production line, to demonstrate their real-world applications.
== Technology and Partnerships ==
Figure’s robots utilize NVIDIA’s robotics stack, with models trained on NVIDIA DGX for large-scale AI capabilities and designed using the NVIDIA Omniverse platform. Despite having limited in-house AI expertise, Figure's collaboration with OpenAI has furthered its AI models’ conversational abilities, marking a unique blend of perception and interaction capabilities. Figure's Series B funding announcement, amounting to $675 million, supports continued research and deployment of these robotic solutions in industrial settings.
== Goals and Future Directions ==
Figure aims to commercialize its robots for industries such as manufacturing and logistics, with a potential consumer rollout on the horizon. Their humanoid models are designed for adaptability across different environments, driven by evolving AI, perception, and mobility technologies.
[[Category:Companies]]
83415b894632ea121e13b80e8d11fd5b0c2ef934
1904
1899
2024-11-09T19:49:33Z
Ben
2
wikitext
text/x-wiki
'''Figure Inc.''' is a robotics company focused on developing autonomous humanoid robots for industrial and potential consumer applications. Founded in 2022 in Silicon Valley, the company aims to address labor shortages by deploying general-purpose robots capable of performing complex tasks. Its current model, [[Figure 02]], incorporates NVIDIA’s AI computing technologies, including NVIDIA H100 GPUs, enabling high-performance vision, dexterity, and conversational AI.
{{infobox company
| name = Figure AI
| country = USA
| website_link = https://www.figure.ai/
| robots = [[Figure 01]], [[Figure 02]]
}}
== History ==
Founded by Brett Adcock, Figure demonstrated rapid early progress in humanoid robotics, leveraging synthetic data from NVIDIA Isaac Sim. The company’s humanoid robots were first deployed in testing environments, such as BMW's production line, to demonstrate their real-world applications.
== Technology and Partnerships ==
Figure’s robots utilize NVIDIA’s robotics stack, with models trained on NVIDIA DGX for large-scale AI capabilities and designed using the NVIDIA Omniverse platform. Despite having limited in-house AI expertise, Figure's collaboration with OpenAI has furthered its AI models’ conversational abilities, marking a unique blend of perception and interaction capabilities. Figure's Series B funding announcement, amounting to $675 million, supports continued research and deployment of these robotic solutions in industrial settings.
== Goals and Future Directions ==
Figure aims to commercialize its robots for industries such as manufacturing and logistics, with a potential consumer rollout on the horizon. Their humanoid models are designed for adaptability across different environments, driven by evolving AI, perception, and mobility technologies.
[[Category:Companies]]
d328b03ed32178ba87676942afaefd3a667fd19f
Main Page
0
1
1900
1895
2024-11-09T19:40:05Z
Ben
2
/* Robots */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
Feel free to join our [https://discord.gg/kscale Discord community] for more real-time discussion!
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
{| class="wikitable mw-collapsible mw-collapsed"
|+ Resources
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.youtube.com/watch?v=LiNgr1tz49I&list=PLZnJoM76RM6ItAfZIxJYNKdaR_BobleLY Advanced Robot Dynamics]
| High-quality open-source course from CMU
|-
| [https://www.youtube.com/watch?v=6rUdAOCNXAU&list=PLZnJoM76RM6KugDT9sw5zhAmqKnGeoLRa Optimal Control]
| High-quality open-source course from CMU
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
| Textbook for graduate-level courses in robotics
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Courses]]
| Category for pages about useful courses related to robotics
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Firmware]]
| Category for pages relating to firmware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Papers]]
| Category for humanoid robotics papers
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== Components ===
{| class="wikitable mw-collapsible mw-collapsed"
|+ Actuators
|-
| [[SPIN Servo]]
|-
| [[DEEP Robotics J60]]
|-
| [[Robstride]]
|-
| [[MyActuator]]
|-
| [[Encos]]
|-
| [[Steadywin]]
|-
| [[Elemental Motors]]
|-
| [[Anydrive]]
|-
| [[HEBI]]
|-
| [[ZeroErr]]
|-
| [[CubeMars]]
|-
| [[MjBots]]
|-
| [[Dynamixel]]
|-
| [[LinEngineering]]
|-
| [[Faradyi]]
|}
{| class="wikitable mw-collapsible mw-collapsed"
|+ Inference Boards
|-
| [[Jetson]]
|-
| [[SiMi AI]]
|}
{| class="wikitable mw-collapsible mw-collapsed"
|+ Motor Controllers
|-
| [[VESCular6]]
|-
| [[ODrive]]
|-
| [[OBot]]
|-
| [[Solo Motor Controller]]
|-
| [[moteus-n1]]
|-
| [[K-Scale Motor Controller]]
|}
{| class="wikitable mw-collapsible mw-collapsed"
|+ Communication Protocols
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[AGIBot]]
| [[RAISE-A1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]], [[Alex]]
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[MagicLab,_DREAME]]
| [[MagicBot]]
|-
| [[Engineered Arts]]
| [[Ameca]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure AI]]
| [[Figure 01]], [[Figure 02]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[GALBOT]]
| [[GALBOT]]
|-
| [[Haier]]
| [[Kuavo (Kuafu)]]
|-
| [[Honda Robotics]]
| [[ASIMO]]
|-
| [[Humanoid]]
|
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Instituto Italiano]]
| [[iCub]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[LASER Robotics]]
| [[HECTOR V2]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[NASA]]
| [[Valkyrie]], [[Robonaut2]]
|-
| [[NEURA Robotics]]
| [[4NE-1]]
|-
| [[Noetix]]
| [[Dora]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[PaXini]]
| [[Tora]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[Pollen Robotics]]
| [[Reachy]]
|-
| [[Proxy]]
|
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[SoftBank Robotics]]
| [[Pepper]], [[NAO]]
|-
| [[Stanford Robotics Lab]]
| [[OceanOneK]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[SUPCON]]
| [[Navigator α]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[THK]]
|
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[UC Berkeley]]
| [[Berkeley Blue]]
|-
| [[Unitree]]
| [[H1]], [[G1]]
|-
| [[University of Tehran]]
| [[Surena IV]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[WorkFar]]
| [[WorkFar Syntro]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Xpeng]]
| [[PX5]]
|}
29633789ff1e0a0a81464521b58a49bdeb699940
1905
1900
2024-11-15T20:05:27Z
Ben
2
/* Robots */
wikitext
text/x-wiki
Welcome to the Humanoid Robots wiki!
This is a free resource to learn about humanoid robots.
As you're looking around, if something feels incomplete or out-of-date, please update it so that we can continue providing high-quality information to the community.
Feel free to join our [https://discord.gg/kscale Discord community] for more real-time discussion!
=== Getting Started ===
[[Getting Started with Humanoid Robots]]
{| class="wikitable mw-collapsible mw-collapsed"
|+ Resources
|-
! Name
! Comments
|-
| [[Underactuated Robotics]]
| High-quality open-source course from MIT
|-
| [https://www.youtube.com/watch?v=LiNgr1tz49I&list=PLZnJoM76RM6ItAfZIxJYNKdaR_BobleLY Advanced Robot Dynamics]
| High-quality open-source course from CMU
|-
| [https://www.youtube.com/watch?v=6rUdAOCNXAU&list=PLZnJoM76RM6KugDT9sw5zhAmqKnGeoLRa Optimal Control]
| High-quality open-source course from CMU
|-
| [https://www.epfl.ch/labs/lasa/mit-press-book-learning/#chapter1 Learning for Adaptive and Reactive Robot Control]
| Textbook for graduate-level courses in robotics
|-
| [[Learning algorithms]]
| Resources related to training humanoid models in simulation and real environments
|-
| [[Servo Design]]
| A reference for servos that you can use
|-
| [[:Category:Guides]]
| Category for pages which act as guides
|-
| [[:Category:Courses]]
| Category for pages about useful courses related to robotics
|-
| [[:Category:Electronics]]
| Category for pages about electronics topics
|-
| [[:Category:Hardware]]
| Category for pages relating to hardware
|-
| [[:Category:Firmware]]
| Category for pages relating to firmware
|-
| [[:Category:Software]]
| Category for pages relating to software
|-
| [[:Category:Teleop]]
| Category for pages relating to teleoperation
|-
| [[:Category:Papers]]
| Category for humanoid robotics papers
|-
| [[:Category:Non-humanoid Robots]]
| Category for pages relating to non-humanoid robots
|-
| [[Contributing]]
| How to contribute to the wiki
|}
=== Components ===
{| class="wikitable mw-collapsible mw-collapsed"
|+ Actuators
|-
| [[SPIN Servo]]
|-
| [[DEEP Robotics J60]]
|-
| [[Robstride]]
|-
| [[MyActuator]]
|-
| [[Encos]]
|-
| [[Steadywin]]
|-
| [[Elemental Motors]]
|-
| [[Anydrive]]
|-
| [[HEBI]]
|-
| [[ZeroErr]]
|-
| [[CubeMars]]
|-
| [[MjBots]]
|-
| [[Dynamixel]]
|-
| [[LinEngineering]]
|-
| [[Faradyi]]
|}
{| class="wikitable mw-collapsible mw-collapsed"
|+ Inference Boards
|-
| [[Jetson]]
|-
| [[SiMi AI]]
|}
{| class="wikitable mw-collapsible mw-collapsed"
|+ Motor Controllers
|-
| [[VESCular6]]
|-
| [[ODrive]]
|-
| [[OBot]]
|-
| [[Solo Motor Controller]]
|-
| [[moteus-n1]]
|-
| [[K-Scale Motor Controller]]
|}
{| class="wikitable mw-collapsible mw-collapsed"
|+ Communication Protocols
|-
| [[Controller Area Network (CAN)]]
|-
| [[Inter-Integrated Circuit (I2C)]]
|-
| [[Serial Peripheral Interface (SPI)]]
|-
| [[EtherCAT]]
|}
=== Robots ===
{| class="wikitable"
|-
! Company
! Robots
|-
| [[1X]]
| [[Eve]], [[Neo]]
|-
| [[AGIBot]]
| [[RAISE-A1]]
|-
| [[Agility]]
| [[Cassie]], [[Digit]]
|-
| [[Anthrobotics]]
| [[Anthro]]
|-
| [[Apptronik]]
| [[Valkyrie]], [[Draco]], [[QDH]], [[Apollo]]
|-
| [[AstriBot Corporation]]
| [[Astribot S1]]
|-
| [[Beijing Humanoid Robot Innovation Center]]
| [[Tiangong]]
|-
| [[Boardwalk Robotics]]
| [[Nadia]], [[Alex]]
|-
| [[Booster Robotics]]
| [[BR002]]
|-
| [[Boston Dynamics]]
| [[HD Atlas]], [[Atlas]]
|-
| [[Bot Company]]
|
|-
| [[DATAA Robotics]]
| [[XR4]]
|-
| [[Deep Robotics]]
| [[Wukong-IV]]
|-
| [[MagicLab,_DREAME]]
| [[MagicBot]]
|-
| [[Engineered Arts]]
| [[Ameca]]
|-
| [[FDROBOT]]
| [[T1]]
|-
| [[Figure AI]]
| [[Figure 01]], [[Figure 02]]
|-
| [[Fourier Intelligence]]
| [[GR-1]]
|-
| [[GALBOT]]
| [[GALBOT]]
|-
| [[Haier]]
| [[Kuavo (Kuafu)]]
|-
| [[Honda Robotics]]
| [[ASIMO]]
|-
| [[Humanoid]]
|
|-
| [[Hyperspawn Robotics]]
| [[Shadow-1]]
|-
| [[Instituto Italiano]]
| [[iCub]]
|-
| [[Kawasaki Robotics]]
| [[Kaleido]], [[Friends]]
|-
| [[Kayra.org]]
| [[Kayra]]
|-
| [[Kepler]]
| [[K1]]
|-
| [[K-Scale Labs]]
| [[Stompy]]
|-
| [[Kind Humanoid]]
| [[Mona]]
|-
| [[LASER Robotics]]
| [[HECTOR V2]]
|-
| [[LEJUROBOT]]
| [[Kuavo]]
|-
| [[LimX Dynamics]]
| [[CL-1]]
|-
| [[MenteeBot]]
| [[MenteeBot (Robot)]]
|-
| [[Mirsee Robotics]]
| [[Beomni]], [[Mirsee]]
|-
| [[NASA]]
| [[Valkyrie]], [[Robonaut2]]
|-
| [[NEURA Robotics]]
| [[4NE-1]]
|-
| [[Noetix]]
| [[Dora]]
|-
| [[PAL Robotics]]
| [[Kangaroo]], [[REEM-C]], [[TALOS]]
|-
| [[PaXini]]
| [[Tora]]
|-
| [[POINTBLANK]]
| [[DROPBEAR]]
|-
| [[Pollen Robotics]]
| [[Reachy]]
|-
| [[Proxy]]
|
|-
| [[Rainbow Robotics]]
| [[HUBO]]
|-
| [[Robotera]]
| [[XBot]], [[Starbot]]
|-
| [[Sanctuary]]
| [[Phoenix]]
|-
| [[SoftBank Robotics]]
| [[Pepper]], [[NAO]]
|-
| [[Stanford Robotics Lab]]
| [[OceanOneK]]
|-
| [[SuperDroid Robots]]
| [[Rocky]]
|-
| [[SUPCON]]
| [[Navigator α]]
|-
| [[System Technology Works]]
| [[ZEUS2Q]]
|-
| [[Tesla]]
| [[Optimus]]
|-
| [[THK]]
|
|-
| [[Toyota Research Institute]]
| [[Punyo]], [[T-HR3]]
|-
| [[UBTech]]
| [[Walker X]], [[Panda Robot]], [[Walker S]]
|-
| [[UC Berkeley]]
| [[Berkeley Blue]]
|-
| [[Unitree]]
| [[H1]], [[G1]]
|-
| [[University of Tehran]]
| [[Surena IV]]
|-
| [[Westwood Robotics]]
| [[THEMIS]]
|-
| [[WorkFar]]
| [[WorkFar Syntro]]
|-
| [[Xiaomi]]
| [[CyberOne]]
|-
| [[Xpeng]]
| [[PX5]]
|}
ca5e68b1d8eca95335639f7d11f143984b4c6d58
Figure 02
0
445
1901
2024-11-09T19:45:32Z
Ben
2
Created page with "Figure 02 is a second-generation humanoid robot developed by Figure AI, an AI robotics company based in Sunnyvale, California. Unveiled in August 2024, Figure 02 represents a..."
wikitext
text/x-wiki
Figure 02 is a second-generation humanoid robot developed by Figure AI, an AI robotics company based in Sunnyvale, California.
Unveiled in August 2024, Figure 02 represents a significant advancement in humanoid robotics, combining human-like dexterity with advanced artificial intelligence to perform a wide range of tasks across various industries.<ref name="prnewswire">{{cite web |title=Figure Unveils Figure 02, Its Second-Generation Humanoid, Setting New Standards in AI and Robotics |url=https://www.prnewswire.com/news-releases/figure-unveils-figure-02-its-second-generation-humanoid-setting-new-standards-in-ai-and-robotics-301894889.html |website=PR Newswire |date=August 15, 2024}}</ref>
== Development and Features ==
Building upon its predecessor, Figure 01, the development of Figure 02 involved a comprehensive hardware and software redesign. Key features of Figure 02 include:
* '''Speech-to-Speech Interaction''': Equipped with onboard microphones and speakers, Figure 02 can engage in natural speech-to-speech conversations with humans, utilizing custom AI models developed in collaboration with OpenAI.<ref name="prnewswire" />
* '''Vision-Language Model (VLM)''': The robot incorporates an onboard VLM that enables rapid common-sense visual reasoning, allowing it to interpret and respond to visual inputs effectively.<ref name="prnewswire" />
* '''Enhanced Battery Capacity''': A custom 2.25 kWh battery pack housed in the torso provides over 50% more energy compared to the previous generation, extending operational runtime.<ref name="prnewswire" />
* '''Integrated Wiring''': The design features integrated cabling for power and communication, resulting in concealed wires, increased reliability, and a more streamlined appearance.<ref name="prnewswire" />
* '''Advanced Vision System''': Six onboard RGB cameras power an AI-driven vision system, enabling the robot to perceive and understand its environment with high accuracy.<ref name="prnewswire" />
* '''Fourth-Generation Hands''': The robot's hands are equipped with 16 degrees of freedom and human-equivalent strength, facilitating the execution of a wide range of human-like tasks.<ref name="prnewswire" />
* '''Increased Computational Power''': With triple the computational and AI inference capabilities of its predecessor, Figure 02 can perform real-world AI tasks fully autonomously.<ref name="prnewswire" />
== Applications ==
Figure 02 is designed for deployment across various sectors, including manufacturing, logistics, warehousing, and retail. In recent tests at BMW Manufacturing's facility in Spartanburg, South Carolina, the robot successfully performed AI data collection and use case training, demonstrating its potential to enhance productivity and efficiency in industrial settings.<ref name="prnewswire" />
== Limitations ==
Despite its advancements, Figure 02 has certain limitations:
* '''Environmental Adaptability''': The robot's performance in diverse and unstructured environments requires further validation to ensure reliability across various settings.
* '''Task Complexity''': While capable of performing a wide range of tasks, Figure 02 may encounter challenges with highly complex or nuanced activities that demand advanced problem-solving skills.
* '''Battery Life''': Although the battery capacity has been enhanced, the operational runtime may still be insufficient for continuous use in certain industrial applications without periodic recharging.
* '''Physical Capabilities''': The robot's ability to handle extremely delicate or intricate objects is still under development, which may limit its applicability in tasks requiring fine motor skills.
Addressing these limitations is essential for the broader adoption and integration of Figure 02 into various industries.
== See Also ==
* [https://www.figure.ai/ Figure AI]
* [https://openai.com/ OpenAI]
* [https://en.wikipedia.org/wiki/Humanoid_robot Humanoid Robot]
== References ==
<references />
== External Links ==
* [https://www.figure.ai/ Figure AI Official Website]
* [https://www.youtube.com/@figureai Figure AI YouTube Channel]
2d7999f719284d7c7d8ad33cba91a3e7bba56c69
1902
1901
2024-11-09T19:47:45Z
Ben
2
wikitext
text/x-wiki
Figure 02 is a second-generation humanoid robot developed by Figure AI, an AI robotics company based in Sunnyvale, California.
{{infobox robot
| name = Figure 02
| organization = Figure AI
| video = [https://www.youtube.com/watch?v=7bFsxW_8ABE Video]
| cost = Not publicly disclosed
| height = 5 ft 6 in (167.6 cm)
| weight = 155 lbs (70 kg)
| speed = 1.2 m/s
| lift force = 44 lbs (20 kg)
| battery life = 5 hours
| battery capacity = 2.25 kWh
| purchase = Not available for purchase
| number made = Limited prototypes
| dof = 16 in hands
| status = In development
}}
Unveiled in August 2024, Figure 02 represents a significant advancement in humanoid robotics, combining human-like dexterity with advanced artificial intelligence to perform a wide range of tasks across various industries.<ref name="prnewswire">{{cite web |title=Figure Unveils Figure 02, Its Second-Generation Humanoid, Setting New Standards in AI and Robotics |url=https://www.prnewswire.com/news-releases/figure-unveils-figure-02-its-second-generation-humanoid-setting-new-standards-in-ai-and-robotics-301894889.html |website=PR Newswire |date=August 15, 2024}}</ref>
== Development and Features ==
Building upon its predecessor, Figure 01, the development of Figure 02 involved a comprehensive hardware and software redesign. Key features of Figure 02 include:
* '''Speech-to-Speech Interaction''': Equipped with onboard microphones and speakers, Figure 02 can engage in natural speech-to-speech conversations with humans, utilizing custom AI models developed in collaboration with OpenAI.<ref name="prnewswire" />
* '''Vision-Language Model (VLM)''': The robot incorporates an onboard VLM that enables rapid common-sense visual reasoning, allowing it to interpret and respond to visual inputs effectively.<ref name="prnewswire" />
* '''Enhanced Battery Capacity''': A custom 2.25 kWh battery pack housed in the torso provides over 50% more energy compared to the previous generation, extending operational runtime.<ref name="prnewswire" />
* '''Integrated Wiring''': The design features integrated cabling for power and communication, resulting in concealed wires, increased reliability, and a more streamlined appearance.<ref name="prnewswire" />
* '''Advanced Vision System''': Six onboard RGB cameras power an AI-driven vision system, enabling the robot to perceive and understand its environment with high accuracy.<ref name="prnewswire" />
* '''Fourth-Generation Hands''': The robot's hands are equipped with 16 degrees of freedom and human-equivalent strength, facilitating the execution of a wide range of human-like tasks.<ref name="prnewswire" />
* '''Increased Computational Power''': With triple the computational and AI inference capabilities of its predecessor, Figure 02 can perform real-world AI tasks fully autonomously.<ref name="prnewswire" />
== Applications ==
Figure 02 is designed for deployment across various sectors, including manufacturing, logistics, warehousing, and retail. In recent tests at BMW Manufacturing's facility in Spartanburg, South Carolina, the robot successfully performed AI data collection and use case training, demonstrating its potential to enhance productivity and efficiency in industrial settings.<ref name="prnewswire" />
== Limitations ==
Despite its advancements, Figure 02 has certain limitations:
* '''Environmental Adaptability''': The robot's performance in diverse and unstructured environments requires further validation to ensure reliability across various settings.
* '''Task Complexity''': While capable of performing a wide range of tasks, Figure 02 may encounter challenges with highly complex or nuanced activities that demand advanced problem-solving skills.
* '''Battery Life''': Although the battery capacity has been enhanced, the operational runtime may still be insufficient for continuous use in certain industrial applications without periodic recharging.
* '''Physical Capabilities''': The robot's ability to handle extremely delicate or intricate objects is still under development, which may limit its applicability in tasks requiring fine motor skills.
Addressing these limitations is essential for the broader adoption and integration of Figure 02 into various industries.
== See Also ==
* [https://www.figure.ai/ Figure AI]
* [https://openai.com/ OpenAI]
* [https://en.wikipedia.org/wiki/Humanoid_robot Humanoid Robot]
== References ==
<references />
== External Links ==
* [https://www.figure.ai/ Figure AI Official Website]
* [https://www.youtube.com/@figureai Figure AI YouTube Channel]
cf26614e9f70fe5646d489b9082a024a7cdf8c77
1903
1902
2024-11-09T19:49:08Z
Ben
2
wikitext
text/x-wiki
Figure 02 is a second-generation humanoid robot developed by [[Figure AI]], an AI robotics company based in Sunnyvale, California.
{{infobox robot
| name = Figure 02
| organization = Figure AI
| video = [https://www.youtube.com/watch?v=7bFsxW_8ABE Video]
| cost = Not publicly disclosed
| height = 5 ft 6 in (167.6 cm)
| weight = 155 lbs (70 kg)
| speed = 1.2 m/s
| lift force = 44 lbs (20 kg)
| battery life = 5 hours
| battery capacity = 2.25 kWh
| purchase = Not available for purchase
| number made = Limited prototypes
| dof = 16 in hands
| status = In development
}}
Unveiled in August 2024, Figure 02 represents a significant advancement in humanoid robotics, combining human-like dexterity with advanced artificial intelligence to perform a wide range of tasks across various industries.<ref name="prnewswire">{{cite web |title=Figure Unveils Figure 02, Its Second-Generation Humanoid, Setting New Standards in AI and Robotics |url=https://www.prnewswire.com/news-releases/figure-unveils-figure-02-its-second-generation-humanoid-setting-new-standards-in-ai-and-robotics-301894889.html |website=PR Newswire |date=August 15, 2024}}</ref>
== Development and Features ==
Building upon its predecessor, Figure 01, the development of Figure 02 involved a comprehensive hardware and software redesign. Key features of Figure 02 include:
* '''Speech-to-Speech Interaction''': Equipped with onboard microphones and speakers, Figure 02 can engage in natural speech-to-speech conversations with humans, utilizing custom AI models developed in collaboration with OpenAI.<ref name="prnewswire" />
* '''Vision-Language Model (VLM)''': The robot incorporates an onboard VLM that enables rapid common-sense visual reasoning, allowing it to interpret and respond to visual inputs effectively.<ref name="prnewswire" />
* '''Enhanced Battery Capacity''': A custom 2.25 kWh battery pack housed in the torso provides over 50% more energy compared to the previous generation, extending operational runtime.<ref name="prnewswire" />
* '''Integrated Wiring''': The design features integrated cabling for power and communication, resulting in concealed wires, increased reliability, and a more streamlined appearance.<ref name="prnewswire" />
* '''Advanced Vision System''': Six onboard RGB cameras power an AI-driven vision system, enabling the robot to perceive and understand its environment with high accuracy.<ref name="prnewswire" />
* '''Fourth-Generation Hands''': The robot's hands are equipped with 16 degrees of freedom and human-equivalent strength, facilitating the execution of a wide range of human-like tasks.<ref name="prnewswire" />
* '''Increased Computational Power''': With triple the computational and AI inference capabilities of its predecessor, Figure 02 can perform real-world AI tasks fully autonomously.<ref name="prnewswire" />
== Applications ==
Figure 02 is designed for deployment across various sectors, including manufacturing, logistics, warehousing, and retail. In recent tests at BMW Manufacturing's facility in Spartanburg, South Carolina, the robot successfully performed AI data collection and use case training, demonstrating its potential to enhance productivity and efficiency in industrial settings.<ref name="prnewswire" />
== Limitations ==
Despite its advancements, Figure 02 has certain limitations:
* '''Environmental Adaptability''': The robot's performance in diverse and unstructured environments requires further validation to ensure reliability across various settings.
* '''Task Complexity''': While capable of performing a wide range of tasks, Figure 02 may encounter challenges with highly complex or nuanced activities that demand advanced problem-solving skills.
* '''Battery Life''': Although the battery capacity has been enhanced, the 2.25 kWh pack and quoted five-hour runtime imply an average draw of roughly 450 W, so the operational runtime may still be insufficient for continuous use in certain industrial applications without periodic recharging.
* '''Physical Capabilities''': The robot's ability to handle extremely delicate or intricate objects is still under development, which may limit its applicability in tasks requiring fine motor skills.
Addressing these limitations is essential for the broader adoption and integration of Figure 02 into various industries.
== See Also ==
* [https://www.figure.ai/ Figure AI]
* [https://openai.com/ OpenAI]
* [https://en.wikipedia.org/wiki/Humanoid_robot Humanoid Robot]
== References ==
<references />
== External Links ==
* [https://www.figure.ai/ Figure AI Official Website]
* [https://www.youtube.com/@figureai Figure AI YouTube Channel]
325f68987e2859e083fe8889383c623c44c52fdd
Bot Company
0
446
1906
2024-11-15T20:05:43Z
Ben
2
Created page with "[https://www.bot.co/ Bot Company] is a company founded in 2024 by Kyle Vogt, the former founder of Cruise."
wikitext
text/x-wiki
[https://www.bot.co/ The Bot Company] is a robotics company founded in 2024 by Kyle Vogt, co-founder and former CEO of Cruise.
6bf8161b4ddf273617feca757bf142a8e9e2faaf
File:Aesthetic A.png
6
447
1907
2024-11-22T06:20:53Z
Ben
2
wikitext
text/x-wiki
Aesthetic A
b291e15610c6e6c3833d01bc1d8352383708c5ee
File:Aesthetic B.png
6
448
1908
2024-11-22T06:21:08Z
Ben
2
wikitext
text/x-wiki
Aesthetic B
41e50c3d7ec3c36ca5fc0d0171305cd222e7ed67
File:Aesthetic C.png
6
449
1909
2024-11-22T06:22:03Z
Ben
2
wikitext
text/x-wiki
Aesthetic C
b0cdc5dd34f47f3f0ce9a1aa791139758ad57e3d
File:Aesthetic D.png
6
450
1910
2024-11-22T06:22:16Z
Ben
2
wikitext
text/x-wiki
Aesthetic D
f32dc1454233b3bec074d5ecac3a660185a6db63
File:Aesthetic E.png
6
451
1911
2024-11-22T06:22:31Z
Ben
2
wikitext
text/x-wiki
Aesthetic E
57b101b6d460c80b5fe7789b2b065ffebb7d8af8
File:Aesthetic F.png
6
452
1912
2024-11-22T06:22:45Z
Ben
2
wikitext
text/x-wiki
Aesthetic F
e34cf1b551eedc0b7e3675ec47d4c6f755595d7c
K-Scale Labs
0
5
1913
842
2024-11-22T06:25:24Z
Ben
2
/* Logos */
wikitext
text/x-wiki
[[File:Logo.png|right|200px|thumb|The K-Scale Labs logo.]]
[https://kscale.dev/ K-Scale Labs] is building an open-source humanoid robot called [[Stompy]].
{{infobox company
| name = K-Scale Labs
| country = United States
| website_link = https://kscale.dev/
| robots = [[Stompy]]
}}
== Logos ==
Here are some other versions of the K-Scale Labs logo.
<gallery>
Logo.png
K-Scale Labs Raw Logo.png
K-Scale Padded Logo.png
K-Scale Raw Padded Logo.png
KScale Raw Transparent Padded Logo.png
K-Scale Raw Padded White Logo.png
Aesthetic A.png
Aesthetic B.png
Aesthetic C.png
Aesthetic D.png
Aesthetic E.png
Aesthetic F.png
</gallery>
=== Adding Padding ===
Here's a helpful command to add padding to an image using ImageMagick:
<syntaxhighlight lang="bash">
convert \
-channel RGB \
-negate \
-background black \
-alpha remove \
-alpha off \
-gravity center \
-scale 384x384 \
-extent 512x512 \
logo.png \
logo_padded.png
</syntaxhighlight>
[[Category:Companies]]
[[Category:K-Scale]]
cfaaf2cbaee558d22c2cd70c62082bbc3bb87051
1915
1913
2024-11-23T22:56:07Z
Ben
2
wikitext
text/x-wiki
[[File:Logo.png|right|200px|thumb|The K-Scale Labs logo.]]
[https://kscale.dev/ K-Scale Labs] is building an open-source humanoid robot called [[Stompy]].
{{infobox company
| name = K-Scale Labs
| country = United States
| website_link = https://kscale.dev/
| robots = [[Stompy]]
}}
== Logos ==
Here are some other versions of the K-Scale Labs logo.
<gallery>
Logo.png
K-Scale Labs Raw Logo.png
K-Scale Padded Logo.png
K-Scale Raw Padded Logo.png
KScale Raw Transparent Padded Logo.png
K-Scale Raw Padded White Logo.png
Aesthetic A.png
Aesthetic B.png
Aesthetic C.png
Aesthetic D.png
Aesthetic E.png
Aesthetic F.png
Aesthetic G.png
</gallery>
=== Adding Padding ===
Here's a helpful command to add padding to an image using ImageMagick:
<syntaxhighlight lang="bash">
convert \
-channel RGB \
-negate \
-background black \
-alpha remove \
-alpha off \
-gravity center \
-scale 384x384 \
-extent 512x512 \
logo.png \
logo_padded.png
</syntaxhighlight>
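For batch processing, here is a minimal sketch that applies the same padding step to every PNG in a directory (assuming ImageMagick 6's <code>convert</code> is on the PATH; the <code>*_padded.png</code> output names are illustrative, not an established convention):
<syntaxhighlight lang="bash">
# Apply the padding command above to each PNG in the current directory.
# Output files are written alongside the originals with a "_padded" suffix.
for f in *.png; do
  convert \
    -channel RGB \
    -negate \
    -background black \
    -alpha remove \
    -alpha off \
    -gravity center \
    -scale 384x384 \
    -extent 512x512 \
    "$f" \
    "${f%.png}_padded.png"
done
</syntaxhighlight>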
[[Category:Companies]]
[[Category:K-Scale]]
cabcaf24ca61b4c89ac2d4e8ce3dfad66eaeaa18
File:Aesthetic G.png
6
453
1914
2024-11-23T22:53:59Z
Ben
2
wikitext
text/x-wiki
Aesthetic G
015616930e92eeff7fc9982c21c00c4bbf8c1b7a